Robotics, Volume 9, Issue 2 (June 2020) – 29 articles

Cover Story (view full-size image): Wheeled-legged hexapod robots have a wide range of applications, such as surveillance, rescue, or hospital assistance. One of the key operation planning issues is robot balancing during motion. This paper proposes a practical technique for balancing wheeled-legged hexapod robots, in which a Biodex Balance System device is used to obtain the actual position of the center of mass. Experimental tests are carried out to evaluate the effectiveness of this technique and to modify and improve the position of the hexapod robot's center of mass.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
33 pages, 5807 KiB  
Article
Fast Approximation of Over-Determined Second-Order Linear Boundary Value Problems by Cubic and Quintic Spline Collocation
by Philipp Seiwald and Daniel J. Rixen
Robotics 2020, 9(2), 48; https://doi.org/10.3390/robotics9020048 - 25 Jun 2020
Cited by 2 | Viewed by 4683
Abstract
We present an efficient and generic algorithm for approximating second-order linear boundary value problems through spline collocation. In contrast to the majority of other approaches, our algorithm is designed for over-determined problems. These typically occur in control theory, where a system, e.g., a robot, should be transferred from a certain initial state to a desired target state while respecting characteristic system dynamics. Our method uses polynomials of maximum degree three/five as base functions and generates a cubic/quintic spline, which is $C^2$/$C^4$ continuous and satisfies the underlying ordinary differential equation at user-defined collocation sites. Moreover, the approximation is forced to fulfill an over-determined set of two-point boundary conditions, which are specified by the given control problem. The algorithm is suitable for time-critical applications, where accuracy plays only a secondary role. For consistent boundary conditions, we experimentally validate convergence towards the analytic solution, while for inconsistent boundary conditions our algorithm is still able to find a "reasonable" approximation. However, to avoid divergence, collocation sites have to be chosen appropriately. The proposed scheme is evaluated experimentally through comparison with the analytical solution of a simple test system. Furthermore, a fully documented C++ implementation with unit tests and example applications is provided.
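As a rough illustration of the least-squares idea behind the method, the sketch below (Python with NumPy/SciPy, not the authors' C++ BROCCOLI implementation; the test ODE, knot placement, boundary values, and weighting are illustrative assumptions) builds cubic-spline collocation rows for a damped oscillator, appends weighted rows for an over-determined set of boundary conditions, and solves the stacked system in a least-squares sense.

```python
# A minimal sketch of least-squares cubic spline collocation for the
# over-determined BVP  y'' + b*y' + c*y = 0  with clamped values AND
# derivatives at both ends (four boundary conditions, more than a
# second-order ODE solution can generally satisfy exactly).
import numpy as np
from scipy.interpolate import BSpline

b, c = 1.0, 10.0                       # assumed damping/stiffness (underdamped)
t0, tn, k = 0.0, 5.0, 3                # horizon and spline degree (cubic)
knots = np.r_[[t0] * (k + 1), np.linspace(t0, tn, 8)[1:-1], [tn] * (k + 1)]
n_basis = len(knots) - k - 1
t_coll = np.linspace(t0, tn, 12)       # user-defined collocation sites

def basis_matrix(x, deriv=0):
    """Evaluate the deriv-th derivative of every B-spline basis function at x."""
    cols = []
    for i in range(n_basis):
        coef = np.zeros(n_basis)
        coef[i] = 1.0
        s = BSpline(knots, coef, k)
        cols.append(s.derivative(deriv)(x) if deriv else s(x))
    return np.column_stack(cols)

# Collocation rows: the spline must satisfy the ODE at the collocation sites.
A_ode = (basis_matrix(t_coll, 2) + b * basis_matrix(t_coll, 1)
         + c * basis_matrix(t_coll))
r_ode = np.zeros(len(t_coll))

# Over-determined boundary rows, weighted heavily so they dominate:
# y(t0) = 1, y(tn) = 0, y'(t0) = 0, y'(tn) = 0.
ends = np.array([t0, tn])
A_bc = np.vstack([basis_matrix(ends, 0), basis_matrix(ends, 1)])
r_bc = np.array([1.0, 0.0, 0.0, 0.0])
w = 1e3

coef, *_ = np.linalg.lstsq(np.vstack([A_ode, w * A_bc]),
                           np.r_[r_ode, w * r_bc], rcond=None)
y = BSpline(knots, coef, k)            # C^2-continuous cubic approximation
print(y(t0), y.derivative()(t0), y(tn), y.derivative()(tn))
```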
Show Figures

Figure 1: Humanoid robot LOLA developed at the Lehrstuhl für Angewandte Mechanik, Technical University of Munich (TUM). The proposed algorithm is used within the walking pattern generation framework of LOLA; see [13] for details. The robot is 1.8 m tall and weighs about 60 kg. Left: photo and kinematic configuration of the system with 24 actuated degrees of freedom. Right: simplified model of the multi-body dynamics with torso mass $m_t$, foot mass $m_f$, and torso mass moment of inertia $\Theta_t$, which (together with the ground reaction forces/torques) contribute to the ordinary differential equation (ODE) describing the center of mass (CoM) dynamics (blue).

Figure 2: Visual interpretation of the over-determined boundary value problem (BVP): the start and end pose (left/right) represent the boundary conditions, while the intermediate motion (transparent) tries to approximate the inherent system dynamics of the simplified model.

Figure 3: Segmentation and parametrization of the investigated spline $y(t)$. The spline consists of $n$ interconnected segments, which share the interior knots with their neighbors. Each segment is described through the local interpolation parameter $\xi_i \in [0, 1]$ (blue).

Figure 4: The computed spline $y(t)$ (black) approximating the real solution $F(t)$ (green). The approximation satisfies the underlying ODE at the specified collocation sites $\{t_k\}$ (blue) and fulfills the boundary conditions (BC) at $t_0$ and $t_n$ (orange), but does not necessarily coincide with the real solution at the collocation points.

Figure 5: Left: mass-spring-damper system used for validation. Right: analytical solution for the underdamped case ($\alpha = 1$, $\beta = 1$, $\gamma = 10$), the overdamped case ($\alpha = 1$, $\beta = 10$, $\gamma = 10$), and the critically damped case ($\alpha = 1$, $\beta = 10$, $\gamma = 25$). The solution is plotted for the initial conditions $F_0 = 1$, $\dot{F}_0 = 0$.

Figure 6: Consistent BCs: convergence of the approximation $y(t)$ (blue and green) towards the analytic solution $F(t)$ (black, dashed) for $\nu = 1, \dots, 9, 30, 50, 70, 100$. The top row belongs to the underdamped case, while the bottom row represents the overdamped case. From left to right: approximation with a cubic spline without virtual control points (left), a cubic spline with virtual control points (center), and a quintic spline (right). The corresponding best approximation $\nu_{\mathrm{max}}$ is drawn in bold blue.

Figure 7: Root mean square (RMS) of the error $e(t)$ and residual $r(t)$ as defined in (65) and (66). The left subscript differentiates between cubic $C$ and quintic $Q$ spline collocation. Moreover, for cubic spline collocation, we identify the variants without virtual control points (i.e., free first-order boundaries $\dot{y}_0$, $\dot{y}_n$) and with virtual control points by the right subscripts $free$ and $virt$, respectively. The left plot belongs to the case of consistent BCs, while the right one was obtained using inconsistent BCs. Note that for inconsistent BCs an analytic solution does not exist; thus, the error $e(t)$ is not defined and we consider only the residual $r(t)$.

Figure 8: Residual $r(t)$ as defined in (66) for consistent (left) and inconsistent (right) BCs in the underdamped case. For best presentation, the count of collocation sites is chosen as $\nu = 4$ such that the collocation sites (dots) are given by $\{t_k\} = \{1, 2, 3, 4\}$. For cubic spline collocation the left subscript $C$ is used, and $Q$ for the quintic counterpart. Moreover, the right subscripts $free$ and $virt$ differentiate between the variants without and with virtual control points, respectively. The virtual control points are highlighted with circles.

Figure 9: Inconsistent BCs: approximation $y(t)$ (blue, green, and orange) and reference system $F(t)$ (black, dashed) for $\nu = 1, \dots, 4, \nu_{\mathrm{opt}}, 13, 15, 17, 20$. The top row belongs to the underdamped case, while the bottom row represents the overdamped case. From left to right: approximation with a cubic spline (no virtual control points, left), a cubic spline (with virtual control points, center), and a quintic spline (right). The corresponding best approximation $\nu_{\mathrm{opt}}$ is drawn in bold blue. Diverging approximations for $\nu > \nu_{\mathrm{opt}}$ are colored orange.

Figure 10: Left: (minimum) runtime $T$ and condition $C$ of $A_{\mathrm{coll}}$ for running Algorithm 2 over the count of collocation sites $\nu$. Right: root mean square (RMS) of the residual $r(t)$ as defined in (66) over the (minimum) runtime $T$. The left subscripts $C$ and $Q$ belong to the cubic and quintic spline versions of the algorithm, respectively. Moreover, the right subscript indicates whether cubic spline collocation was performed without ($free$) or with ($virt$) virtual control points. All measurements were performed using the underdamped parametrization and consistent BCs.

Figure 11: Runtime (percentile) of the relevant steps of Algorithm 2 relative to the total runtime for quintic spline collocation. Code sections with negligible execution time are not plotted and are not accounted for in the total runtime. The measurements were obtained using an extended time horizon $t_n = 50$ and the parametrization $\alpha = 1$, $\beta = 0.1$, $\gamma = 10$ (underdamped) to allow a better representation of high counts of collocation sites of up to $\nu = 300$.

Figure A1: Design of the classes CubicSplineCollocator and QuinticSplineCollocator in BROCCOLI. Left: class inheritance and segmentation of process() into subroutines. Right: proposed strategy for efficient parallelization in the case of a decoupled, multi-dimensional BVP. Note that the first and last subroutines (grayed out) are optional; they perform a validity check of the given input parameters and convert the final result into a corresponding broccoli::curve::Trajectory data structure for convenient evaluation of the generated polynomial spline, respectively.
18 pages, 4574 KiB  
Article
Multi-Robot Coverage and Persistent Monitoring in Sensing-Constrained Environments
by Tauhidul Alam and Leonardo Bobadilla
Robotics 2020, 9(2), 47; https://doi.org/10.3390/robotics9020047 - 23 Jun 2020
Cited by 10 | Viewed by 4588
Abstract
This article examines the problems of multi-robot coverage and persistent monitoring of regions of interest with limited-sensing robots. A group of robots, each equipped with only contact sensors and a clock, execute a simple trajectory by repeatedly moving straight and then bouncing at perimeter boundaries by rotating in place. We introduce an approach that finds a joint trajectory for multiple robots to cover a given environment and generates cycles for the robots to persistently monitor the target regions in the environment. From a given initial configuration, our approach iteratively finds the joint trajectory of all the robots that covers the entire environment. Our approach also computes periodic trajectories of all the robots for monitoring some regions, where trajectories overlap but do not involve robot-robot collisions. We present experimental results from multiple simulations and physical experiments demonstrating the practical utility of our approach.
(This article belongs to the Section Sensors and Control in Robotics)
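The bounce behavior that drives the coverage search is easy to prototype. The sketch below (Python; the rectangular room, grid resolution, and bouncing angle are illustrative assumptions rather than the paper's environments) simulates one robot that moves straight and rotates in place by a fixed angle whenever it reaches the boundary, while coverage is tallied on a coarse occupancy grid.

```python
# A minimal sketch of the "move straight, bounce at the boundary" strategy:
# the robot turns counterclockwise by a fixed bouncing angle phi at each wall
# contact; covered cells are marked on a coarse grid.
import numpy as np

W, H, CELL = 4.0, 3.0, 0.25                 # room size (m) and grid resolution
grid = np.zeros((int(H / CELL), int(W / CELL)), dtype=bool)

def run(x, y, theta, phi_deg=135.0, step=0.02, n_steps=20000):
    phi = np.radians(phi_deg)
    for _ in range(n_steps):
        x, y = x + step * np.cos(theta), y + step * np.sin(theta)
        if not (0.0 <= x <= W and 0.0 <= y <= H):
            # Bounce: clamp back to the wall and rotate in place by phi.
            # If the heading still points outward, the next step triggers
            # another bounce, so the robot keeps rotating until it re-enters.
            x, y = min(max(x, 0.0), W), min(max(y, 0.0), H)
            theta += phi
        grid[min(int(y / CELL), grid.shape[0] - 1),
             min(int(x / CELL), grid.shape[1] - 1)] = True

run(0.5, 0.5, theta=0.3)
print(f"covered {grid.mean():.1%} of the cells")
```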
Show Figures

Figure 1: An example scenario: an indoor environment with multiple robots moving through the space.

Figure 2: Simple bouncing strategy. (a) A robot rotates counterclockwise by the bouncing angle $\phi$ with respect to its current direction when it bounces off the boundary of the environment. (b) When two robots collide with each other (only in the multi-robot coverage task), they turn counterclockwise by their bouncing angles $\phi_1$, $\phi_2$ from their current moving orientations.

Figure 3: Trajectory generation and neighbor selection. (a) A generated trajectory $\tilde{x}$ from an initial configuration $x_0$. (b) The best nearest neighbor $\beta_l$ (green square) among the unprocessed neighbors of the last cell $b$ of the trajectory $\tilde{x}$.

Figure 4: A joint trajectory of the robots connecting through the new neighboring cell $\beta_l$.

Figure 5: Two cycle-forming scenarios. (a) The same initial and ending cell $b$. (b) Different initial and ending cells.

Figure 6: From simulation to physical implementation. (a) An artificial lab environment to cover. (b,c) A joint trajectory of the two robots in their configuration space and workspace, respectively, generated from our simulation using the set of bouncing angles $\Phi = \{135^\circ\}$. The initial configuration of the two robots is depicted in (b) by blue and green circles for their locations and by colored arrowheads starting from these circles for their orientations. Their initial locations are also depicted in (c) by blue and green circles. There is only one robot-robot collision, in the middle of the upper part of the environment. (d–f) Three snapshots at different times of the physical experiment of the generated joint trajectory of two iRobot Create 2.0 robots controlled with two Arduinos.

Figure 7: Simulation results of the multi-robot coverage. (a) The first simulation environment. (b) Comparison of the number of steps required for complete coverage of the first environment against the number of robots used.

Figure 8: Simulation results of the multi-robot coverage. (a) The second simulation environment. (b) Comparison of the number of steps required for complete coverage of the second environment against the number of robots used.

Figure 9: Simulation result of the multi-robot coverage for a larger simulation environment. (a) A random initial configuration of 10 robots in the environment, with their locations represented by the centers of the circles and their orientations by colored arrowheads inside the circles. (b) An environment coverage heatmap produced by the joint trajectory of the 10 robots starting from their initial configuration.

Figure 10: Simulation result. Overlapping and collision-free trajectories of the two robots in their configuration space (a) and workspace (b) that persistently monitor two regions in the given environment, starting from the encircled locations and using the set of bouncing angles $\Phi = \{45^\circ\}$.

Figure 11: From simulation to physical implementation. (a) Another artificial lab environment. (b,c) Overlapping and collision-free trajectories of the two robots in their configuration space and workspace, respectively, found from our simulation, that persistently monitor two regions in the given environment, starting from the encircled locations and using the set of bouncing angles $\Phi = \{90^\circ\}$. (d–f) Three snapshots at different times of a distributed physical implementation of the overlapping and collision-free trajectories of two iRobot Create 2.0 robots controlled with two Arduinos.
19 pages, 23753 KiB  
Article
Simulation of an Autonomous Mobile Robot for LiDAR-Based In-Field Phenotyping and Navigation
by Jawad Iqbal, Rui Xu, Shangpeng Sun and Changying Li
Robotics 2020, 9(2), 46; https://doi.org/10.3390/robotics9020046 - 21 Jun 2020
Cited by 71 | Viewed by 20746
Abstract
The agriculture industry needs to increase crop yield substantially to meet growing global demand. Selective breeding programs can accelerate crop improvement, but collecting phenotyping data is time- and labor-intensive because of the size of the research fields and the frequency of the work required. Automation is a promising tool to address this phenotyping bottleneck. This paper presents a Robot Operating System (ROS)-based mobile field robot that simultaneously navigates through occluded crop rows and performs various phenotyping tasks, such as measuring plant volume and canopy height using a 2D LiDAR in a nodding configuration. The efficacy of the proposed 2D LiDAR configuration for phenotyping is assessed in a high-fidelity simulated agricultural environment in the Gazebo simulator with a ROS-based control framework and compared with standard LiDAR configurations used in agriculture. Using the proposed nodding LiDAR configuration, a strategy for navigation through occluded crop rows is presented. The proposed LiDAR configuration achieved an estimation error of 6.6% and 4% for plot volume and canopy height, respectively, which was comparable to the commonly used LiDAR configurations. The hybrid strategy combining GPS waypoint following and LiDAR-based navigation successfully guided the robot through an agricultural crop field with a root mean squared error of 0.0778 m, which was 0.2% of the total traveled distance. The presented robot simulation framework in ROS and the optimized LiDAR configuration helped to expedite the development of agricultural robots, which ultimately will aid in overcoming the phenotyping bottleneck.
(This article belongs to the Section Agricultural and Field Robotics)
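The two phenotypic traits reported above can be computed from an already-segmented point cloud in a few lines. The sketch below (Python with SciPy; synthetic points and an assumed ground-plane datum stand in for the paper's LiDAR pipeline) estimates canopy height relative to the ground datum and plot volume via a 3D convex hull, mirroring the steps of the phenotyping pipeline in Figure 6.

```python
# A minimal sketch of the height/volume step: plant points are assumed to be
# segmented from the ground plane already (cf. the point cloud split shown in
# Figure 6); synthetic data is used here instead of LiDAR scans.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
ground_z = 0.02                              # assumed ground-plane datum (m)
# Synthetic segmented plot: 1.0 m x 0.5 m footprint, canopy up to ~0.8 m.
plant = rng.uniform([0.0, 0.0, ground_z], [1.0, 0.5, 0.8], size=(5000, 3))

canopy_height = np.percentile(plant[:, 2], 99) - ground_z  # robust top height
plot_volume = ConvexHull(plant).volume                     # hull volume in m^3

print(f"canopy height ~ {canopy_height:.2f} m, "
      f"plot volume ~ {plot_volume:.3f} m^3")
```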
Show Figures

Figure 1: (A) Simulated rover Phenotron in Gazebo and (B) robot kinematics: $V_L$ = average velocity of the left wheels, $V_R$ = average velocity of the right wheels, $\theta$ = angle relative to the X axis.

Figure 2: (A) Dimensions of the cotton plot and (B) a section of the ground floor.

Figure 3: Test field for LiDAR-based phenotyping.

Figure 4: LiDAR configurations: (A) Tilted, (B) Side, (C) Overhead, and (D) Nodding.

Figure 5: Assembled LiDAR data on uneven terrain visualized within RVIZ using the Tilted configuration.

Figure 6: LiDAR phenotyping pipeline to extract phenotypic traits. (A) A point cloud generated from the Nodding LiDAR configuration. (B) The point cloud is split into the ground plane and a segmented point cloud of a single plot. (C) The isolated ground plane is used as a datum for calculating the height of the segmented point cloud, and a convex hull of the segmented point cloud is used for volume estimation.

Figure 7: LiDAR crop row characterization strategy: (A) point cloud generated from the actuated LiDAR, (B) downsampled and voxelized point cloud, (C) left and right split crop rows with a radius outlier filter, (D) left and right crop row characterization using RANSAC.

Figure 8: ROS node diagram.

Figure 9: Control loop for crop row navigation.

Figure 10: (A) Robot heading definition; $d_r$ and $d_l$: distance from the left and right crop rows, respectively; $a_c$ and $a_r$: angle of the crop rows and angle of the robot, respectively. (B) Navigation strategy; $d_{cr}$ = distance between crop rows.

Figure 11: Mean percentage error of the height and volume phenotyping estimation from the four LiDAR configurations. The error bars indicate the standard deviation.

Figure 12: Navigation strategy results: (A) straight crop rows and (B) angled crop rows.

Figure 13: (A) Generated point cloud of four crop rows and (B) the simulation field for navigation tests.
30 pages, 25522 KiB  
Article
Learning Sequential Force Interaction Skills
by Simon Manschitz, Michael Gienger, Jens Kober and Jan Peters
Robotics 2020, 9(2), 45; https://doi.org/10.3390/robotics9020045 - 17 Jun 2020
Cited by 10 | Viewed by 5303
Abstract
Learning skills from kinesthetic demonstrations is a promising way of minimizing the gap between human manipulation abilities and those of robots. We propose an approach to learn sequential force interaction skills from such demonstrations. The demonstrations are decomposed into a set of movement primitives by inferring the underlying sequential structure of the task. The decomposition is based on a novel probability distribution which we call the Directional Normal Distribution. The distribution allows inferring the movement primitive's composition, i.e., its coordinate frames, control variables, and target coordinates, from the demonstrations. In addition, it permits determining an appropriate number of movement primitives for a task via model selection. After finding the task's composition, the system learns to sequence the resulting movement primitives in order to be able to reproduce the task on a real robot. We evaluate the approach on three different tasks: unscrewing a light bulb, box stacking, and box flipping. All tasks are kinesthetically demonstrated and then reproduced on a Barrett WAM robot.
(This article belongs to the Special Issue Feature Papers 2020)
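The very first step of the decomposition, cutting demonstrations into candidate segments, can be shown compactly. In the sketch below (Python; a 1-D synthetic trajectory replaces the recorded demonstrations, and cuts at contact changes are omitted), segment boundaries are placed at zero-velocity crossings, the same cue the paper combines with force-torque contact events.

```python
# A minimal sketch of segmentation at zero-velocity crossings (ZVCs): a ZVC
# is a sample where the velocity changes sign, i.e., the motion comes to a
# (local) stop and reverses; each ZVC closes one candidate segment.
import numpy as np

t = np.linspace(0.0, 10.0, 1000)
y = np.sin(1.3 * t) + 0.3 * np.sin(3.1 * t)   # synthetic 1-D demonstration
v = np.gradient(y, t)                          # finite-difference velocity

zvc = np.where(np.sign(v[:-1]) * np.sign(v[1:]) < 0)[0] + 1
bounds = np.r_[0, zvc, len(t) - 1]

segments = [(t[a], t[b]) for a, b in zip(bounds[:-1], bounds[1:])]
print(f"{len(segments)} segments, cut points at t = {np.round(t[zvc], 2)}")
```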
Show Figures

Figure 1: Overview of our approach. From the kinesthetic demonstrations (a), we record the joint angles of the robot, the measurements of the force-torque sensor, and the positions and orientations of all objects in the scene. The observations are then projected onto a set of predefined task-spaces. The task-spaces are split according to what they control (b). The resulting data are used to decompose the demonstrated task, yielding a set of MPs and their activation probabilities $A_{P/F}$, $A_{O/T}$, $A_H$ over time. When executing a task on a robot, the system controls the current movement by activating and deactivating the MPs found by the task decomposition (d). To learn a mapping from the state of the world (represented by features $F$) to the active MPs, we connect the MPs to a set of classifiers. Those classifiers are then trained to learn a mapping from the current feature state $f$ to the MP activations for the next time step $a_{P/F}$, $a_{O/T}$, $a_H$ (c). Note that when reproducing the task on a real robot, the features do not come from the demonstrations but are computed directly from the current sensor data.

Figure 2: Overview of the presented task-decomposition approach using a simple 1D toy example. Plot (a) shows two demonstrations (black and gray) in two different coordinate frames. Our approach first segments the data by finding zero-velocity crossings (ZVCs, Flanagan et al. [44]) and contact changes (b). The different segments are illustrated using different colors. Subsequently, the segments are clustered using a Hidden Markov Model (HMM) (c). Here, we use mixtures of Directional Normal Distributions (DNDs) as state emissions of the HMM. DNDs allow for clustering the segments based on their convergence properties. If two segments converge to the same attractor in one of the task-spaces corresponding to the coordinate frames, they are more likely assigned to the same cluster. After training, an MP is defined for each resulting cluster, and its coordinate frame, control variables, and attractor target can be inferred from the parameters of the cluster. The attractor goals of the MPs are marked in the plot and the uncertainty about their position is shown with ellipsoids. The goals are shown only in the most likely coordinate frame of the MP. For instance, the yellow MP will be position-controlled in the second frame (with a certainty of 92%), as the target force is close to zero and the segments assigned to the MP converge best in this frame.

Figure 3: Illustration of the EM algorithm. The black arrows correspond to data points and their velocity vectors. The data points are drawn uniformly from the shown grid. The velocity vectors are determined by drawing points from the black normal distribution and scaling the difference vector to the data points to a fixed value. The EM algorithm iteratively finds a target on the grid that fits best to all data points and the corresponding velocity vectors. The red ellipsoid shows the initial and current guesses for the target. In the E-step of the algorithm, for each data point a projection along the velocity vector is computed that fits best to the current estimate of the target (blue dashed lines). In the M-step, a new estimate for the distribution parameters is found.

Figure 4: Overview of the sequence learning step. The three HMMs (a) resulting from the task decomposition are used for labeling the demonstrations with the most likely MPs (different colors) over time (b). The labeled demonstrations are compiled into a single sequence graph representing potential MP orders (c). Local classifiers are responsible for transitioning between successive MPs when executing the skill on a real robot. The classifiers learn to discriminate the individual classes in feature space (state of the robot and the world). The dashed lines indicate transition points between the classes, where a transition is supposed to be triggered. Note that the features in the plot are only illustrative and have no further meaning.

Figure 5: Illustration of teaching and reproduction of the box flipping task. The task was to initially move the end-effector to a position close to the box. Subsequently, the box was pushed against the obstacle. Then, the box was flipped. Finally, the end-effector was moved to its final position.

Figure 6: The two coordinate frames of the box flipping task shown from the side. The origin of the Cartesian frame is at the lower edge of the obstacle. The z-axes of both frames are identical and correspond to the lower edge of the obstacle (indicated by a black dot). Here, the origin is at the center of the lower edge. Note that the frames are attached to the obstacle and not to the box.

Figure 7: Experimental results for the flipping task. The MP plot shows the active MP in different colors. It can be seen from the plots that the decomposition of the task is consistent throughout all demonstrations.

Figure 8: Experimental setup for the box stacking task (top) and pictures from the reproduction (bottom). The four boxes were put at random initial positions for each demonstration and the reproduction. They were tracked using AR markers.

Figure 9: Data of one demonstration (in the world frame) and resulting task decomposition for the box stacking task.

Figure 10: Comparison of our method with a baseline approach and the TP-GMM. For the baseline approach, we clustered the end-points of each segment using an HMM with GMMs as state emissions. Further discussion of the results can be found in Section 5.2. (a) In this snippet of the box stacking decomposition results, the green box was stacked on the yellow box. The decompositions are shown spatially in the coordinate frame of the yellow box. The colors correspond to the colors from Figure 9. Compared to our approach, the baseline approach merges the red and green MPs into a single MP. The TP-GMM in general needs more MPs, as it models a path instead of MP targets. As it is difficult to visualize the parameters of the auto-regressive model, the BP-AR-HMM is not shown here. (b) For one full demonstration, the most likely coordinate frames over time are shown in the lower plots. The colors indicate the active coordinate frame (frame of the yellow, green, red, or blue box). Our approach clearly distinguishes between phases of getting the different boxes and stacking them on the yellow box. The other approaches either do not separate the phases in such a clear way (TP-GMM) or pick the wrong coordinate frame in some task phases (endpoint method and BP-AR-HMM).

Figure 11: The three different experimental setups for the light bulb unscrewing task. For each setup, the light bulb holder and box were put at different locations on the two tables.

Figure 12: Task decomposition over time for one of the nine demonstrations. From the top, the plots show the position of the end-effector, the measured forces at the wrist, the orientation of the end-effector, and the finger joint angles, respectively. The dashed lines indicate the zero-velocity crossings and contact changes. The plot at the bottom shows the most likely MP for each point in time. All MPs can be associated with a meaningful description (see Table 3).

Figure 13: Task-space trajectories for all nine demonstrations of the light bulb unscrewing task. Plots (a,b) show the end-effector positions in the light bulb coordinate frame and box coordinate frame, respectively. The colors indicate to which MP each segment is assigned. The markers correspond to the means $\mu$ of the MP targets and the ellipsoids indicate their covariance matrices $\Sigma$. Note that the position target of the blue MP is hard to see because it coincides with the target of the red MP. The targets are plotted only in the coordinate frame that was assigned to each MP. Plots (c,d) show the orientations of the end-effector and the finger configurations, respectively (both in the world frame). As all three fingers were aligned equally for this task, only the joint angle of one finger is shown in (d).
17 pages, 2343 KiB  
Article
User Affect Elicitation with a Socially Emotional Robot
by Mingyang Shao, Matt Snyder, Goldie Nejat and Beno Benhabib
Robotics 2020, 9(2), 44; https://doi.org/10.3390/robotics9020044 - 3 Jun 2020
Cited by 16 | Viewed by 5409
Abstract
To effectively communicate with people, social robots must be capable of detecting, interpreting, and responding to human affect during human–robot interactions (HRIs). To accurately detect user affect during HRIs, affect elicitation techniques need to be developed to create and train appropriate affect detection models. In this paper, we present such a novel affect elicitation and detection method for social robots in HRIs. Non-verbal emotional behaviors of the social robot were designed to elicit user affect, which was directly measured through electroencephalography (EEG) signals. HRI experiments with both younger and older adults were conducted to evaluate our affect elicitation technique and to compare the two types of affect detection models we developed and trained utilizing multilayer perceptron neural networks (NNs) and support vector machines (SVMs). The results showed that, on average, the self-reported valence and arousal were consistent with the intended elicited affect. Furthermore, the EEG data obtained could be used to train affect detection models, with the NN models achieving higher classification rates.
(This article belongs to the Special Issue Feature Papers 2020)
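The NN-versus-SVM comparison can be reproduced in outline with off-the-shelf tools. The sketch below (Python with scikit-learn; the band-power features and labels are synthetic placeholders, not the Muse EEG recordings from the study) trains a multilayer perceptron and an RBF-kernel SVM on the same feature matrix and reports their test accuracies.

```python
# A minimal sketch of the two affect detection model types: band-power
# features (4 bands x 4 electrodes per window, synthetic here) classified
# into a binary affect label by an MLP and an SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 16))                     # synthetic PSD features
y = (X[:, :4].mean(axis=1) + 0.5 * rng.normal(size=n) > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                   random_state=0).fit(Xtr, ytr)
svm = SVC(kernel="rbf").fit(Xtr, ytr)
print(f"NN accuracy:  {nn.score(Xte, yte):.2f}")
print(f"SVM accuracy: {svm.score(Xte, yte):.2f}")
```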
Show Figures

Figure 1: The proposed affect elicitation and affect detection methodology.

Figure 2: Affective expression examples for Pepper for (a) positive valence and high arousal; and (b) negative valence and low arousal.

Figure 3: Muse sensor four electrode locations on the International 10–20 system.

Figure 4: Examples of the electroencephalography (EEG) signal in the time domain for the (a) positive valence high arousal (PH) session and (b) negative valence low arousal (NL) session, and the corresponding power spectral density (PSD) in the frequency domain for five consecutive sliding windows for the (c) PH session and (d) NL session for the four electrode locations.

Figure 5: Average computed (a) PSD features from the θ, α, β, and γ frequency bands, and (b) frontal EEG asymmetry features based on the frontal α and β band powers for the PH and NL sessions for valence and arousal, obtained from the EEG signals in Figure 4.

Figure 6: Affect elicitation human–robot interaction (HRI) scenario.

Figure 7: Box plots of reported affect for both PH and NL sessions: (a) valence and (b) arousal for all participants; and (c) valence and (d) arousal for each age group, where each box contains the interquartile range (25th to 75th percentile) of the corresponding data, yellow lines represent the median, and circles represent outliers.

Figure 8: Receiver operating characteristic (ROC) curves for the neural network (NN) and support vector machine (SVM) models for (a) valence and (b) arousal.
17 pages, 8877 KiB  
Article
Design and FDM/FFF Implementation of a Compact Omnidirectional Wheel for a Mobile Robot and Assessment of ABS and PLA Printing Materials
by Elena Rubies and Jordi Palacín
Robotics 2020, 9(2), 43; https://doi.org/10.3390/robotics9020043 - 28 May 2020
Cited by 19 | Viewed by 5504
Abstract
This paper proposes the design and 3D printing of a compact omnidirectional wheel optimized to create a small series of three-wheeled omnidirectional mobile robots. The proposed omnidirectional wheel is based on free-rotating passive wheels aligned transversally to the center of the main wheel and separated by a constant gap. This paper compares three inner passive wheel designs based on mass-produced parts and 3D printed elements. The inner passive wheel that best combines weight, cost, and friction is implemented with a metallic ball bearing fitted inside a 3D printed U-grooved ring that holds a soft toric joint. The proposed design has been implemented using acrylonitrile butadiene styrene (ABS) and tough polylactic acid (PLA) as 3D printing materials in order to empirically compare the deformation of the weakest parts of the mechanical design. The conclusion is that the most critical parts of the omnidirectional wheel are less prone to deformation and show better mechanical properties if they are printed horizontally (with the axes that hold the passive wheels oriented parallel to the build surface), with an infill density of 100%, and using tough PLA rather than ABS as the 3D printing material.
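The trade-off behind the wheel dimensioning, more passive rollers versus external radius and inter-roller gap, follows from simple circle geometry. The sketch below (Python) uses one plausible chord-based model in which each roller spans a chord of the outer circle; this model and all dimensions are assumptions for illustration, not the paper's exact design equations.

```python
# A minimal sketch of the roller-count/radius/gap trade-off: n rollers of
# chord width w are spaced evenly on a wheel of external radius R, and the
# leftover arc between neighbors is the gap (negative means overlap).
import numpy as np

def gap_between_rollers(R, w, n):
    roller_angle = 2.0 * np.arcsin(w / (2.0 * R))  # angle spanned by one roller
    gap_angle = 2.0 * np.pi / n - roller_angle
    return R * gap_angle                           # arc length of the gap

def min_radius(w, n, target_gap=0.002):
    """Smallest external radius keeping at least target_gap between rollers."""
    for R in np.arange(w, 0.2, 1e-4):              # coarse 0.1 mm sweep
        if gap_between_rollers(R, w, n) >= target_gap:
            return R
    return np.nan

for n in (6, 8, 10, 12):
    R = min_radius(w=0.012, n=n)                   # assumed 12 mm roller width
    print(f"n = {n:2d}: external radius >= {1e3 * R:.1f} mm")
```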
Show Figures

Graphical abstract

Figure 1: CAD design of the assistant personal robot (APR) motion system [29]: (a) top view with the 3 omnidirectional wheels shifted 120°; (b) detail of the alternated passive rollers and roller brackets used in the omnidirectional wheels of the APR; each roller uses two ball bearings to reduce friction during rotation.

Figure 2: CAD design of the proposed omnidirectional wheel: (a) complete wheel; (b) exploded view of the T-shaped piece supporting two free-rotating inner passive wheels.

Figure 3: CAD diagram showing the parameters that define the wheel design and the distribution of the passive rollers in the main wheel.

Figure 4: CAD diagram showing the gap between passive rollers at different positions of the wheel: (a) on the same T-shaped supporting structure; (b) on consecutive T-shaped supporting structures.

Figure 5: Relationship between the number of inner passive wheels and the total external radius of the omnidirectional wheel for three feasible passive wheel diameters and wheel widths.

Figure 6: Relationship between the number of inner passive wheels and the resulting internal gap for three feasible passive wheel diameters and wheel widths.

Figure 7: Detail of the successive layers conforming several half T-shaped structures 3D printed using black acrylonitrile butadiene styrene (ABS): horizontal and vertical orientations and stick widths of 2 and 4 mm.

Figure 8: Detail of the successive layers conforming two half T-shaped structures printed with transparent polylactic acid (PLA) 3D850 (stick width of 4 mm): (a) printed vertically; (b) printed horizontally.

Figure 9: Experimental configuration used in the T-structure deformation test: (a) sample piece during a test; (b) sample piece that has exceeded its elastic deformation limit.

Figure 10: Relationship between deformation and force applied to T-structures 3D printed with ABS: (gray dots) wide piece ($W_s$ = 4 mm) printed horizontally, (blue dots) thin piece ($W_s$ = 2 mm) printed horizontally, (black dots) wide piece ($W_s$ = 4 mm) printed vertically, (magenta dots) thin piece ($W_s$ = 2 mm) printed vertically. The gray line is the linear regression of all pieces printed horizontally ($W_s$ = 4 and 2 mm) and the black line of all pieces printed vertically ($W_s$ = 4 and 2 mm).

Figure 11: Relationship between deformation and force applied to T-structures 3D printed horizontally with PLA 3D850 ($W_s$ = 4 mm): (black dots) piece with a 100% infill density; (magenta dots) piece with a 30% infill density. The black line is the linear regression of all pieces printed with a 100% infill and the magenta line of all pieces printed with a 30% infill.

Figure 12: Relationship between deformation and force applied to thin T-structures ($W_s$ = 2 mm): (magenta dots) printed horizontally with a 100% infill using ABS; (black dots) printed horizontally with a 100% infill using PLA 3D850. The magenta line is the linear regression of all pieces printed with ABS and the black line of all pieces printed with PLA 3D850.

Figure 13: Detail of the omnidirectional wheel design: (a) complete CAD representation; (b) detail of the central structure of the wheel made of PLA 3D850 with a honeycomb infill density of 10%.

Figure 14: Top view of the prototype implementation of a compact mobile robot based on the omnidirectional wheel design proposed in this paper.
23 pages, 1806 KiB  
Review
Laparoscopic Robotic Surgery: Current Perspective and Future Directions
by Sally Kathryn Longmore, Ganesh Naik and Gaetano D. Gargiulo
Robotics 2020, 9(2), 42; https://doi.org/10.3390/robotics9020042 - 27 May 2020
Cited by 33 | Viewed by 15789
Abstract
Just as laparoscopic surgery provided a giant leap in safety and recovery for patients over open surgery methods, robotic-assisted surgery (RAS) is doing the same to laparoscopic surgery. The first laparoscopic-RAS systems to be commercialized were the Intuitive Surgical, Inc. (Sunnyvale, CA, USA) da Vinci and the Computer Motion Zeus. These systems were similar in many aspects, which led to a patent dispute between the two companies. Before the dispute was settled in court, Intuitive Surgical bought Computer Motion, and thus owned critical patents for laparoscopic-RAS. Recently, the patents held by Intuitive Surgical have begun to expire, leading to many new laparoscopic-RAS systems being developed and entering the market. In this study, we review the newly commercialized and prototype laparoscopic-RAS systems. We compare the features of the imaging and display technology, the surgeon's console, and the patient cart of the reviewed RAS systems. We also briefly discuss the future directions of laparoscopic-RAS surgery. With new laparoscopic-RAS systems now commercially available, we should see RAS being adopted more widely in surgical interventions and the costs of procedures using RAS decreasing in the near future.
(This article belongs to the Special Issue Intelligent Medical Robotics)
Show Figures

Figure 1: Surgeon consoles. (A) da Vinci [90], (B) MiroSurge [63], (C) Revo-I [91], (D) Senhance [5], (E) Versius seated and (F) standing [92].

Figure 2: Patient carts. (A) da Vinci Xi [90], (B) da Vinci SP [90], (C) Senhance [5], (D) MiroSurge [63], (E) Versius [92] and (F) Revo-I [91].

Figure 3: Effect of the fulcrum on the movement of the end effector by the robot arm. The fulcrum is located within the abdominal wall. When the robot arm moves to the right, the end effector moves left; when the robot arm moves left, the end effector moves right. The motions of the robot arm around the fulcrum are inverted.

Figure 4: The range of motion of the da Vinci EndoWrist end effector. The EndoWrist has motions like the human hand: flexion and extension, adduction and abduction, and grasping.
12 pages, 868 KiB  
Article
Real-Time Cable Force Calculation beyond the Wrench-Feasible Workspace
by Roland Boumann and Tobias Bruckmann
Robotics 2020, 9(2), 41; https://doi.org/10.3390/robotics9020041 - 27 May 2020
Cited by 7 | Viewed by 4636
Abstract
Under special circumstances, a cable-driven parallel robot (CDPR) may leave its wrench-feasible workspace. Standard approaches for the computation of set-point cable forces are likely to fail in this case. The novel nearest corner method for calculating appropriate cable forces when the CDPR is outside of its wrench-feasible workspace was introduced in former work of the authors. The obtained cable force distributions aim at continuity and generate wrenches close to the desired values. The method employs geometrical operations in the cable force space and promises real-time usability because of its non-iterative structure. In a simplified simulation, a cable break scenario was used to carry out more detailed testing of the method regarding different parameters, a higher number of cables, and the numerical efficiency. A brief discussion of the continuity of the method when re-entering the wrench-feasible workspace is presented.
(This article belongs to the Special Issue Theory and Practice on Robotics and Mechatronics)
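The fallback logic can be outlined briefly: compute a standard force distribution first and, if it violates the cable force limits, fall back to a corner of the feasible force "cube". The sketch below (Python with NumPy, under an assumed planar geometry and assumed force limits) is a brute-force illustration of that idea; the actual nearest corner method replaces the exhaustive corner search with non-iterative geometric projections in the cable force space, which is what makes it real-time capable.

```python
# A minimal sketch: a planar 2-DOF platform on m = 3 cables whose unit
# direction vectors form the columns of the structure matrix AT, with
# static equilibrium  AT @ f + w = 0  and force limits f_min <= f <= f_max.
import numpy as np
from itertools import product

AT = np.array([[0.8, -0.8, -0.8],
               [0.6,  0.6, -0.6]])          # assumed cable directions (2 x 3)
f_min, f_max = 10.0, 100.0                  # assumed cable force limits (N)
w = np.array([0.0, -450.0])                 # desired wrench, e.g., gravity

# Standard approach: mid-range reference forces plus a pseudoinverse
# correction onto the solution space of AT @ f = -w.
f_ref = np.full(3, 0.5 * (f_min + f_max))
f = f_ref - np.linalg.pinv(AT) @ (w + AT @ f_ref)

if not np.all((f_min <= f) & (f <= f_max)):
    # Outside the wrench-feasible workspace: no admissible distribution
    # produces w exactly, so pick the cube corner whose wrench is closest.
    corners = np.array(list(product([f_min, f_max], repeat=3)))
    errors = np.linalg.norm(corners @ AT.T + w, axis=1)
    f = corners[np.argmin(errors)]

print("cable forces:", f, "| wrench residual:", AT @ f + w)
```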
Show Figures

Figure 1
<p>Model parameters of the cable-robot.</p>
Figure 2
<p>Visualization of solution space, cube, manifold, and map for <math display="inline"><semantics> <mrow> <mi>r</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math>.</p>
Figure 3
<p>Projection of corners on solution space for an empty set of <math display="inline"><semantics> <mi mathvariant="script">F</mi> </semantics></math>, example for <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>3</mn> <mo>,</mo> <mi>r</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>.</p>
Figure 4
<p>Situation after break of cable associated with upper left winch. Movement of end-effector with comparison of different exponential weights <span class="html-italic">p</span>, numbering of winches clockwise, starting at bottom left. <math display="inline"><semantics> <msub> <mi>t</mi> <mi>end</mi> </msub> </semantics></math> specifies the time to reach the goal position.</p>
Figure 5
<p>Static workspace of a model with <math display="inline"><semantics> <mrow> <mi>r</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> before and after cable failure, black dots indicate the winch positions.</p>
Figure 6
<p>Position, velocity, switch between both methods, and forces in simulation with <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mn>20</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>r</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>.</p>
Figure 7
<p>Calculation time throughout a trajectory with exponential weights of <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mn>8</mn> </mrow> </semantics></math>.</p>
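The force-distribution problem behind this abstract can be made concrete in a few lines. The following minimal Python sketch only sets up the structure-matrix equation and a naive clamped least-squares fallback outside the wrench-feasible workspace; it is an illustration of the problem setting, not the authors' nearest corner method, and the toy structure matrix and force limits are invented for the example.

```python
import numpy as np

def cable_forces(AT, w, f_min, f_max):
    """Naive set-point force computation for a CDPR.

    AT : (r, m) structure matrix, w : (r,) external wrench.
    Solves AT @ f = -w in a minimum-norm sense, then clamps the result
    to the force cube [f_min, f_max]^m.  Inside the wrench-feasible
    workspace the clamp stays inactive; outside it, this crude fallback
    (unlike the nearest corner method) no longer satisfies the wrench
    equation exactly.
    """
    f = np.linalg.pinv(AT) @ (-w)      # minimum-norm particular solution
    return np.clip(f, f_min, f_max)

# toy planar example: r = 2 DOF, m = 3 cables (values are invented)
AT = np.array([[1.0, -0.5, -0.5],
               [0.0,  0.8, -0.8]])
w = np.array([0.0, -9.81])             # gravity wrench on the end-effector
print(cable_forces(AT, w, f_min=1.0, f_max=100.0))
```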
22 pages, 3449 KiB  
Article
Benchmark Dataset Based on Category Maps with Indoor–Outdoor Mixed Features for Positional Scene Recognition by a Mobile Robot
by Hirokazu Madokoro, Hanwool Woo, Stephanie Nix and Kazuhito Sato
Robotics 2020, 9(2), 40; https://doi.org/10.3390/robotics9020040 - 26 May 2020
Cited by 4 | Viewed by 4367
Abstract
This study was conducted to develop original benchmark datasets that simultaneously include indoor–outdoor visual features. Indoor visual information in such images includes outdoor features to a degree that varies greatly with time, weather, and season. We obtained time-series scene images using a wide field of view (FOV) camera mounted on a mobile robot moving along a 392-m route in an indoor environment surrounded by transparent glass walls and windows, in two directions in three seasons. For this study, we propose a unified method for extracting, characterizing, and recognizing visual landmarks that are robust to human occlusion in a real environment in which robots coexist with people. Using our method, we conducted an evaluation experiment to recognize scenes divided into up to 64 zones at fixed intervals. The experimentally obtained results revealed the performance and characteristics of meta-parameter optimization, mapping to category maps, and recognition accuracy. Moreover, we visualized similarities between scene images using category maps. We also identified cluster boundaries obtained from mapping weights. Full article
Show Figures

Figure 1
<p>Difference of similar scenes in daytime and nighttime at the same position.</p>
Figure 2
<p>Proposed method of the feature-extraction module for visual landmark (VL) extraction and part-based feature description.</p>
Figure 3
<p>Proposed method of the recognition module for scene recognition and cluster boundary extraction of category map.</p>
Figure 4
<p>Mobile robot (Double; Double Robotics, Inc., Burlingame, CA, USA), camera (PIXPRO SP360; Eastman Kodak Co., Rochester, NY, USA), and a scene image in an environment where indoor–outdoor features are mixed.</p>
Figure 5
<p>Map of the experiment environment.</p>
Figure 6
<p>Zones of five division types with fixed intervals.</p>
Figure 7
<p>Scene appearance differences depending on locomotion directions at similar positions.</p>
Figure 8
<p>Scene appearance differences depending on season at similar positions.</p>
Figure 9
<p>Results of comparison of extracted features with and without You Only Look Once (YOLO).</p>
Figure 10
<p>Parameter experiment results of the number of self-organizing map (SOM) and counter-propagation network (CPN) mapping units (<math display="inline"><semantics> <mrow> <mi>S</mi> <mo>,</mo> <mi>Q</mi> </mrow> </semantics></math>).</p>
Figure 11
<p>Parameter experiment results of learning iterations.</p>
Figure 12
<p>Recognition accuracy of summer datasets (SD).</p>
Figure 13
<p>Recognition accuracy of autumn datasets (AD).</p>
Figure 14
<p>Recognition accuracy of winter datasets (WD).</p>
Figure 15
<p>Heatmap confusion matrix.</p>
Figure 16
<p>Heatmap confusion matrix (HCM) example with 16 zones.</p>
Figure 17
<p>Results of category maps that visualize inter-zone relations.</p>
Figure 18
<p>Results of U-Matrix extraction of cluster boundaries.</p>
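Since the recognition module builds on self-organizing maps, a compact sketch of the underlying training loop may help readers unfamiliar with category maps. This is a generic SOM in Python/NumPy with an invented grid size and learning-rate schedule, not the SOM/CPN configuration used in the paper.

```python
import numpy as np

def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map (SOM) of the kind used to build
    category maps.  data : (n_samples, n_features) feature vectors.
    Returns the trained weight grid of shape (*grid, n_features)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.random((h, w, data.shape[1]))
    yy, xx = np.mgrid[0:h, 0:w]
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # best-matching unit for the sampled feature vector
        d = np.linalg.norm(W - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(d), d.shape)
        # neighborhood-weighted update with a decaying rate and radius
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        g = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
        W += lr * g[..., None] * (x - W)
    return W
```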
19 pages, 5281 KiB  
Article
The Role of Personality Factors and Empathy in the Acceptance and Performance of a Social Robot for Psychometric Evaluations
by Silvia Rossi, Daniela Conti, Federica Garramone, Gabriella Santangelo, Mariacarla Staffa, Simone Varrasi and Alessandro Di Nuovo
Robotics 2020, 9(2), 39; https://doi.org/10.3390/robotics9020039 - 23 May 2020
Cited by 43 | Viewed by 7714
Abstract
Research and development in socially assistive robotics have produced several novel applications in the care of senior people. However, some applications are still unexplored, such as the use of robots as psychometric tools allowing for a quick and dependable evaluation of human users’ intellectual capacity. To fully exploit the application of a social robot as a psychometric tool, it is necessary to account for the user factors that might influence the interaction with a robot and the evaluation of the user’s cognitive performance. To this end, we invited senior participants to use a prototype of a robot-led cognitive test and analyzed the influence of personality traits and the user’s empathy on cognitive performance and technology acceptance. Results show a positive influence of a personality trait, “openness to experience”, on the human–robot interaction, and that other factors, such as anxiety, trust, and intention to use, influence technology acceptance and correlate with the evaluation by psychometric tests. Full article
(This article belongs to the Special Issue Robotics Research for Healthy Living and Active Ageing)
Show Figures

Figure 1
<p>Pictures of the testing environment taken from different sides.</p>
Figure 2
<p>Testing procedure. The green parts were performed by a psychologist in a standard room. The blue parts were performed with the Pepper robot in the testing environment.</p>
Figure 3
<p>Screen-shots of Task 1: robotic administration of the MoCA Test (<b>left</b>); and of Task 2: interaction and monitoring (<b>right</b>).</p>
Figure 4
<p>Average global scores of NEO-PI-3 personality factors.</p>
26 pages, 6208 KiB  
Article
A Note on Equivalent Linkages of Direct-Contact Mechanisms
by Wen-Tung Chang and Dung-Yan Yang
Robotics 2020, 9(2), 38; https://doi.org/10.3390/robotics9020038 - 20 May 2020
Cited by 3 | Viewed by 7829
Abstract
In this paper, the inequivalence of direct-contact mechanisms and their equivalent four-bar linkages in jerk analysis is discussed. Kinematic analyses for three classical types of direct-contact mechanisms, consisting of (a) higher pairs with permanently invariant curvature centers, (b) higher pairs with suddenly changed curvature, and (c) higher pairs with continuously varying curvature, are performed through representative case studies. The analyzed results show that the equivalent four-bar linkage cannot give a correct value of jerk for most situations in the three case studies. Subsequently, the concept of an “equivalent six-bar linkage” for direct-contact mechanisms is proposed in order to discuss the infeasibility of the equivalent four-bar linkage for jerk analysis. It is found that the suddenly changed or continuously varying curvature of the higher pairs is not captured by the sudden or continuous link-length variations of the equivalent four-bar linkage, which leads to inconsistency between the angular accelerations of the coupler and the contact normal, and finally results in the infeasibility of the equivalent four-bar linkage for jerk analysis of most direct-contact mechanisms. It is also found that the concept of the equivalent six-bar linkage can be applied to evaluate higher-order time derivatives for most direct-contact mechanisms. The presented case studies and discussion demonstrate the inequivalence of direct-contact mechanisms and their equivalent four-bar linkages in terms of jerk analysis. Full article
(This article belongs to the Special Issue Theory and Practice on Robotics and Mechatronics)
Show Figures

Figure 1
<p>Illustration of a three-link direct-contact mechanism and its equivalent four-bar linkage; (<b>a</b>) the direct-contact mechanism and (<b>b</b>) the equivalent four-bar linkage.</p>
Figure 2
<p>Illustration of a planar gear mechanism with a pair of involute spur gears and its equivalent four-bar linkage; (<b>a</b>) the planar gear mechanism and (<b>b</b>) the equivalent four-bar linkage.</p>
Figure 3
<p>Illustration of a disk cam mechanism with a circular-arc cam and an oscillating roller follower and its equivalent four-bar linkages; (<b>a</b>) the disk cam mechanism and (<b>b</b>) the equivalent four-bar linkages.</p>
Figure 4
<p>Angular motion curves of the follower for a disk cam mechanism with a circular-arc cam and an oscillating roller follower; (<b>a</b>) the angular position, (<b>b</b>) the angular velocity, (<b>c</b>) the angular acceleration, and (<b>d</b>) the angular jerk of the follower.</p>
Figure 5
<p>Angular motion curves of the contact normal for a disk cam mechanism with a circular-arc cam and an oscillating roller follower; (<b>a</b>) the angular position, (<b>b</b>) the angular velocity, (<b>c</b>) the angular acceleration, and (<b>d</b>) the angular jerk of the contact normal.</p>
Figure 6
<p>Illustration of a disk cam mechanism with a double-dwell cam and an oscillating roller follower.</p>
Figure 7
<p>Illustration of a disk cam mechanism with a double-dwell cam and an oscillating roller follower and its equivalent four-bar linkage; (<b>a</b>) the disk cam mechanism and (<b>b</b>) the equivalent four-bar linkage.</p>
Figure 8
<p>Variations of link lengths and angular positions for an equivalent four-bar linkage of a disk cam mechanism with a double-dwell cam and an oscillating roller follower; (<b>a</b>) the link lengths and (<b>b</b>) the angular positions of links.</p>
Figure 9
<p>Kinematic analysis results for a disk cam mechanism with a double-dwell cam and an oscillating roller follower and its equivalent four-bar linkage; (<b>a</b>) the angular velocities, (<b>b</b>) the angular accelerations, and (<b>c</b>) the angular jerks.</p>
Figure 10
<p>Comparison results between motion curves obtained in <a href="#robotics-09-00038-f009" class="html-fig">Figure 9</a>; (<b>a</b>) the sum of angular velocities, (<b>b</b>) the sum of angular accelerations, and (<b>c</b>) the sum of angular jerks.</p>
Figure 11
<p>Illustration of a disk cam mechanism with a double-dwell cam and an oscillating roller follower and its equivalent six-bar linkage; (<b>a</b>) the disk cam mechanism and (<b>b</b>) the equivalent six-bar linkage.</p>
Figure 12
<p>Kinematic analysis results for a disk cam mechanism with a double-dwell cam and an oscillating roller follower and its equivalent six-bar linkage; (<b>a</b>) the angular velocities, (<b>b</b>) the angular accelerations, and (<b>c</b>) the angular jerks.</p>
Figure 13
<p>Comparison results between motion curves obtained in <a href="#robotics-09-00038-f012" class="html-fig">Figure 12</a>; (<b>a</b>) the sum of angular velocities, (<b>b</b>) the sum of angular accelerations, and (<b>c</b>) the sum of angular jerks.</p>
Figure 14
<p>Time derivatives of angles <span class="html-italic">θ</span><sub>4</sub> and <span class="html-italic">λ</span> for a disk cam mechanism with a double-dwell cam and an oscillating roller follower and its equivalent four-bar and six-bar linkages; (<b>a</b>) time derivatives of angle <span class="html-italic">θ</span><sub>4</sub> of the equivalent four-bar linkage, (<b>b</b>) time derivatives of angle <span class="html-italic">θ</span><sub>4</sub> of the equivalent six-bar linkage, and (<b>c</b>) time derivatives of angle <span class="html-italic">λ</span>.</p>
Figure 15
<p>Ping analysis results for a disk cam mechanism with a double-dwell cam and an oscillating roller follower and its equivalent six-bar linkage; (<b>a</b>) the angular pings and (<b>b</b>) the sum of angular pings.</p>
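The jerk (and "ping") comparisons discussed above come down to repeated time differentiation of angular position data. A hedged NumPy sketch with an invented motion law (the paper derives these quantities analytically, not numerically):

```python
import numpy as np

# follower angular position sampled over one cam revolution (toy motion law)
t = np.linspace(0.0, 1.0, 2001)                  # time [s]
theta = 0.1 * (1 - np.cos(2 * np.pi * t))        # smooth rise-return [rad]

omega = np.gradient(theta, t)   # angular velocity      [rad/s]
alpha = np.gradient(omega, t)   # angular acceleration  [rad/s^2]
jerk  = np.gradient(alpha, t)   # angular jerk          [rad/s^3]
ping  = np.gradient(jerk, t)    # next derivative ("ping", cf. Figure 15)
```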
22 pages, 9097 KiB  
Article
Testing Walking-Induced Vibration of Floors Using Smartphones Recordings
by Luca Martinelli, Vitomir Racic, Bruno Alberto Dal Lago and Francesco Foti
Robotics 2020, 9(2), 37; https://doi.org/10.3390/robotics9020037 - 20 May 2020
Cited by 7 | Viewed by 4598
Abstract
Smartphone technology is rapidly evolving, adding sensors of growing accuracy and precision. Structural engineers are among the customers who indirectly benefit from such technological advances. This paper tests whether accelerometers installed in new generations of smartphones can reach the accuracy of professional accelerometers designed for vibration monitoring of civil engineering structures, and how they can be useful. The paper describes an experimental study designed to measure walking-induced vibrations of a slender prefabricated prestressed concrete slab. Both traditional high-accuracy accelerometers and those integrated into commercial smartphones were used for experimental data collection. Direct comparison of the recordings yielded two key findings: the accuracy of smartphone accelerometers largely depends on the specific smartphone model, but is nevertheless satisfactory at the very least for preliminary modal testing. Furthermore, the smartphone-measured accelerations of the lower back were used successfully to indirectly measure pedestrian walking loads. Full article
(This article belongs to the Special Issue Advances in Inspection Robotic Systems)
Show Figures

Figure 1
<p>Example of the response in terms of vertical acceleration of (<b>a</b>) a low-frequency floor (e.g., 2 Hz) and (<b>b</b>) a high-frequency floor (e.g., 11 Hz). Adapted from [<a href="#B35-robotics-09-00037" class="html-bibr">35</a>].</p>
Figure 2
<p>(<b>a</b>) Sensor positions (LB = fifth lumbar vertebra, used in this work; N = navel). (<b>b</b>) Comparison of ground reaction force (GRF, adapted from [<a href="#B8-robotics-09-00037" class="html-bibr">8</a>]).</p>
Figure 3
<p>The prefabricated system DOMUS DRY<sup>®</sup>. Intended floor span is in the range 8–12 m (image used with permission of DLC Consulting).</p>
Figure 4
<p>Prototype of a completely prefabricated system (DOMUS DRY<sup>®</sup> SYSTEM) using dry connections between components (<b>a</b>), and details of the floor system (<b>b</b>) (images used with permission of DLC Consulting).</p>
Figure 5
<p>Plan view of the tested floor with the two arrangements of sensors used in the testing campaign (<b>a</b>,<b>b</b>). Locations A1–A4 mark the position of the reference accelerometers and of smartphones. A moment of Test 3: the person walking wears a smartphone at position LB of <a href="#robotics-09-00037-f002" class="html-fig">Figure 2</a>, while a second smartphone is visible just in front of the person, near one of the reference accelerometers (<b>c</b>).</p>
Figure 6
<p>Time history of recorded vertical acceleration by base-line accelerometer A1 during heel drop test Test 7, and corresponding recording by the nearby smartphone Phone 2 (<b>a</b>). Fourier amplitude spectrum of recordings of Test 7, showing the resonance frequency of the first four structural modes (<b>b</b>).</p>
Figure 7
<p>Time history of recorded vertical acceleration by baseline accelerometer A2 during heel drop test Test 7, and corresponding recording by the nearby smartphone Phone 1 (<b>a</b>). Fourier amplitude spectrum of recordings of Test 7, showing the resonance frequency of the first four structural modes (<b>b</b>). Note the shift with respect to the baseline accelerometer A2.</p>
Figure 8
<p>Variation of the sampling time interval for Smartphone 1, (<b>a</b>), and Smartphone 2, (<b>b</b>), during Test 7. The nominal sampling interval was <span class="html-italic">dt</span> = 0.005 s for Phone 1 and <span class="html-italic">dt</span> = 0.002 s for Phone 2.</p>
Figure 9
<p>Recomputed Fourier amplitude spectrum for Test 7 and Phone 1 on the base of the average sampling frequency, <span class="html-italic">f<sub>s</sub></span>, computed from recorded time stamps: <span class="html-italic">f<sub>s</sub></span> = 198.33 Hz.</p>
Figure 10
<p>Measured accelerations at stations from A1 to A4 for Test 3.</p>
Figure 11
<p>Measured accelerations at stations from A1 to A4 for Test 4.</p>
Figure 12
<p>Numerical model of the floor system, developed inside the Midas Finite Element system. The marks highlight the position of the accelerometers in <a href="#robotics-09-00037-f005" class="html-fig">Figure 5</a>.</p>
Figure 13
<p>Restraint conditions assumed in the analyses at the girders’ ends.</p>
Figure 14
<p>GRF of each step from the Bachmann and Baumann model used in load model L1 for the numerical simulation of Test 3 and Test 4.</p>
Figure 15
<p>Loading scheme for the Bachmann and Baumann L1 load model (<b>a</b>) and for model L2 and L3 (<b>b</b>).</p>
Figure 16
<p>GRF obtained from recorded vertical acceleration near the center of mass (CoM) of the walking person, for Test 3 (<b>a</b>), for Test 4 (<b>b</b>).</p>
Figure 17
<p>Numerically computed accelerations, with loading function L1, at stations from A1 to A4 for Test 3.</p>
Figure 18
<p>Numerically computed accelerations, with loading function L1, at stations from A1 to A4 for Test 4.</p>
Figure 19
<p>Numerically computed accelerations, with loading L3, at stations from A1 to A4 for Test 3.</p>
Figure 20
<p>Numerically computed accelerations, with loading L3, at stations from A1 to A4 for Test 4.</p>
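Figures 8 and 9 point to the main practical pitfall with smartphone recordings: the sampling interval drifts, so the spectrum should be computed with the average sampling frequency estimated from the recorded time stamps. A small NumPy sketch of that step (the function name and detrending are ours, not the paper's):

```python
import numpy as np

def amplitude_spectrum(acc, t_stamps):
    """Fourier amplitude spectrum of a smartphone acceleration record.

    Smartphone loggers do not sample perfectly uniformly, so the
    effective sampling frequency is estimated from the recorded time
    stamps (cf. Figures 8 and 9) before applying the FFT.
    """
    fs = 1.0 / np.mean(np.diff(t_stamps))   # average sampling frequency
    n = len(acc)
    amp = np.abs(np.fft.rfft(acc - np.mean(acc))) / n
    freq = np.fft.rfftfreq(n, d=1.0 / fs)
    return freq, amp
```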
4 pages, 157 KiB  
Editorial
Advances in Robotics and Mechatronics
by Yukio Takeda, Giuseppe Carbone and Shaoping Bai
Robotics 2020, 9(2), 36; https://doi.org/10.3390/robotics9020036 - 18 May 2020
Cited by 1 | Viewed by 4748
Abstract
Robotics and Mechatronics technologies have become essential for developing devices/machines to support human life and society [...] Full article
(This article belongs to the Special Issue Advances in Robotics and Mechatronics)
12 pages, 1751 KiB  
Article
LIDAR Scan Matching in Off-Road Environments
by Hao Fu and Rui Yu
Robotics 2020, 9(2), 35; https://doi.org/10.3390/robotics9020035 - 15 May 2020
Cited by 5 | Viewed by 4978
Abstract
Accurately matching LIDAR scans is a critical step for an Autonomous Land Vehicle (ALV). Whilst most previous works have focused on the urban environment, this paper focuses on the off-road environment. Due to the lack of a publicly available dataset for algorithm comparison, a dataset containing LIDAR pairs with varying amounts of offset in off-road environments is first constructed. Several popular scan matching approaches are then evaluated using this dataset. Results indicate that global approaches, such as Correlative Scan Matching (CSM), perform best on large-offset datasets, whilst local scan matching approaches perform better on small-offset datasets. To combine the merits of both approaches, a two-stage fusion algorithm is designed. In the first stage, several transformation candidates are sampled from the score map of the CSM algorithm. Local scan matching approaches then start from these transformation candidates to obtain the final results. Four performance indicators are also designed to select the best transformation. Experiments on a real-world dataset prove the effectiveness of the proposed approach. Full article
Show Figures

Figure 1
<p>Typical scenes in the off-road environment.</p>
Figure 2
<p>An illustration of the factor graph used to obtain the ground-truth pose for each LIDAR scan.</p>
Figure 3
<p>Registration errors for various approaches on Dataset#0.5 and Dataset#2. <math display="inline"><semantics> <msub> <mi mathvariant="script">C</mi> <mn>3</mn> </msub> </semantics></math> means the removal of the ground points.</p>
Figure 4
<p>Registration errors for various approaches on Dataset#5 and Dataset#10. <math display="inline"><semantics> <msub> <mi mathvariant="script">C</mi> <mn>3</mn> </msub> </semantics></math> means the removal of the ground points.</p>
Figure 5
<p>The scan matching process of combining CSM with LOAM (Scenario-I). (<b>a</b>,<b>b</b>) show the Gaussian blurred obstacle map and height map for the source scan and target scan. (<b>c</b>) shows the registration results of the extended CSM that fuses obstacle and height representations. (<b>d</b>) shows the score map and several possible transformations which are sampled from the high confidence regions. The sampled transformations are passed to the LOAM algorithm, and the evaluation system scores each matching performance. The final 3D registration is selected with the maximum performance score, as shown in (<b>e</b>).</p>
Figure 6
<p>The scan matching process of combining CSM with LOAM (Scenario-II). Another typical scenario in the off-road environment which is more difficult for the scan matching algorithm. Because the uncertainty of this scenario is relatively large, multiple high confidence transformations were sampled to pass to the LOAM algorithm (<b>a</b>–<b>e</b>).</p>
Figure 7
<p>Registration errors for various approaches. From left to right: results on Dataset#2, Dataset#5, Dataset#10, respectively.</p>
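The two-stage fusion can be paraphrased in a few lines: take the k highest-scoring cells of the CSM score map as candidates, refine each with a local matcher, and keep the transformation with the best performance score. The sketch below assumes caller-supplied `refine` and `evaluate` callables standing in for the LOAM-style registration and the paper's four indicators, which are not spelled out here.

```python
import numpy as np

def top_k_candidates(score_map, xs, ys, k=5):
    """Stage 1: sample the k highest-confidence transformations from a
    correlative scan matching (CSM) score map (rows = y, cols = x)."""
    idx = np.argsort(score_map, axis=None)[-k:]
    ii, jj = np.unravel_index(idx, score_map.shape)
    return [(xs[j], ys[i], score_map[i, j]) for i, j in zip(ii, jj)]

def fuse(score_map, xs, ys, refine, evaluate, k=5):
    """Stage 2: refine each candidate with a local matcher (e.g., an
    ICP- or LOAM-style registration) and keep the result with the best
    performance score, mirroring Figure 5d,e."""
    candidates = top_k_candidates(score_map, xs, ys, k)
    results = [refine(x0, y0) for x0, y0, _ in candidates]
    return max(results, key=evaluate)
```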
19 pages, 3226 KiB  
Article
The Use of UTAUT and Post Acceptance Models to Investigate the Attitude towards a Telepresence Robot in an Educational Setting
by Jeonghye Han and Daniela Conti
Robotics 2020, 9(2), 34; https://doi.org/10.3390/robotics9020034 - 13 May 2020
Cited by 50 | Viewed by 9689
Abstract
(1) Background: in the last decade, various investigations in the field of robotics have created several opportunities for further innovation in student education. However, despite scientific evidence, there is still strong scepticism surrounding the use of robots in some social fields, such as personal care and education. (2) Methods: in this research, we present a new tool named the HANCON model, which was developed by merging and extending the constructs of two solid and proven models—the Unified Theory of Acceptance and Use of Technology (UTAUT) model, used to examine the factors that may influence the decision to use a telepresence robot as an instrument in educational practice, and the Post Acceptance Model, used to evaluate acceptability after the actual use of a telepresence robot. The new tool is implemented and used to study the acceptance of a Double telepresence robot by 112 pre-service teachers in an educational setting. (3) Results: the analysis of the experimental results predicts and demonstrates a positive attitude towards the use of a telepresence robot in a school setting and confirms the applicability of the model in an educational context. (4) Conclusions: the constructs of the HANCON model could predict and explain the acceptance of social telepresence robots in social contexts. Full article
Show Figures

Figure 1
<p>Hypothetical construct interrelations for the HANCON model.</p>
Figure 2
<p>Double telepresence robot with t-shirt used in the experiment.</p>
Figure 3
<p>Examples of robot-assisted learning with the KUBI telepresence robot.</p>
Figure 4
<p>KUBI telepresence robot.</p>
Figure 5
<p>(<b>a</b>,<b>b</b>) Examples of interaction with the Double robot.</p>
Figure 6
<p>Demonstration of mathematics problem-solving.</p>
Figure 7
<p>Participants’ interaction for robot-assisted learning.</p>
Figure 8
<p>A participant controls the robot while interacting with the other participants.</p>
Figure 9
<p>Final model: interrelations confirmed by regression scores for the experiments. Dotted line: not confirmed by any regression analysis.</p>
Figure A1
<p>Questionnaires in Korean (used in this study) and for comparison in English, with code, constructs, definition, and items.</p>
12 pages, 2679 KiB  
Article
Collection and Analysis of Human Upper Limbs Motion Features for Collaborative Robotic Applications
by Elisa Digo, Mattia Antonelli, Valerio Cornagliotto, Stefano Pastorelli and Laura Gastaldi
Robotics 2020, 9(2), 33; https://doi.org/10.3390/robotics9020033 - 11 May 2020
Cited by 20 | Viewed by 4631
Abstract
(1) Background: The technologies of Industry 4.0 are increasingly promoting human motion prediction for the improvement of collaboration between workers and robots. The purposes of this study were to fuse the spatial and inertial data of human upper limbs for typical industrial pick and place movements and to analyze the collected features from the future perspective of collaborative robotic applications and human motion prediction algorithms. (2) Methods: Inertial Measurement Units and a stereophotogrammetric system were adopted to track the upper body motion of 10 healthy young subjects performing pick and place operations at three different heights. From the obtained database, 10 features were selected and used to distinguish among pick and place gestures at different heights. Classification performances were evaluated by estimating confusion matrices and F1-scores. (3) Results: Values on the confusion matrices’ diagonals were markedly greater than those in other positions. Furthermore, F1-scores were very high in most cases. (4) Conclusions: Upper arm longitudinal acceleration and the marker coordinates of wrists and elbows can be considered representative features of pick and place gestures at different heights, and they are consequently suitable for the definition of a human motion prediction algorithm to be adopted in effective collaborative robotics industrial applications. Full article
Show Figures

Figure 1
<p>Configuration of markers (sketched as blue dots) and Inertial Measurement Units (IMUs) (sketched as orange rectangles) adopted for the test: (<b>a</b>) Top view of the table with the TAB-IMU and the reference system defined with markers A, B and C; (<b>b</b>) IMUs and markers placement representation on upper body of participants; (<b>c</b>) IMUs and markers placement example on one of the subjects. ACR and ACL: acromions; EMR, ELR, EML, ELL: elbow condyles; IJ: between suprasternal notches; RFA: right forearm, RUA: right upper arm, RSH and LSH: shoulders, THX: sternum, PLV: pelvis, SFA: on RFA-IMU; SUA: on RUA-IMU; T8: the spinal process of the 8<sup>th</sup> thoracic vertebra, WMR, WLR, WML, WLL: styloid processes.</p>
Figure 2
<p>Top view (<b>a</b>) and perspective view (<b>b</b>) of the setup adopted for the test. Three colored boxes at different heights (white = low, black = medium, red = high); hand silhouettes indicating the hands’ neutral position and the cross marking the box placement on the table are visible; (<b>c</b>) table of the randomized sequence of pick and place gestures.</p>
Figure 3
<p>The seven steps of the pick and place task: (<b>a</b>) start in neutral position; (<b>b</b>) pick the black box; (<b>c</b>) place the box on the table; (<b>d</b>) return to neutral position; (<b>e</b>) pick the same box; (<b>f</b>) replace the box in its initial position; (<b>g</b>) return to neutral position.</p>
Figure 4
<p>Anatomical reference systems (blue) and technical reference systems (green) defined from markers coordinates: (<b>a</b>) right forearm systems; (<b>b</b>) right upper arm systems; (<b>c</b>) trunk system.</p>
Figure 5
<p>Algorithm for the distinction among pick and place gestures. Three horizontal black lines are inserted: a continuous line (m) and two dashed lines (m ± σ/2). The mean values of each pair of peaks are represented through dots recalling the boxes’ colors.</p>
Figure 6
<p>Confusion matrices estimated for all selected features. Actual numbers of gestures are on rows, whereas predicted ones are on columns.</p>
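For readers who want to reproduce the evaluation protocol, the confusion-matrix and F1-score computation is straightforward; here is a minimal NumPy version with a toy low/medium/high class encoding (the encoding is our assumption, not the paper's).

```python
import numpy as np

def confusion_and_f1(y_true, y_pred, n_classes=3):
    """Confusion matrix (rows = actual, columns = predicted gestures)
    and per-class F1-scores, as used to evaluate the pick-and-place
    height classification."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for a, p in zip(y_true, y_pred):
        cm[a, p] += 1
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)
    recall = tp / np.maximum(cm.sum(axis=1), 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return cm, f1

# toy check: low/medium/high picks encoded as 0/1/2
cm, f1 = confusion_and_f1([0, 1, 2, 2, 1, 0], [0, 1, 2, 1, 1, 0])
print(cm)
print(f1.round(2))
```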
18 pages, 5961 KiB  
Review
A Survey on Mechanical Solutions for Hybrid Mobile Robots
by Matteo Russo and Marco Ceccarelli
Robotics 2020, 9(2), 32; https://doi.org/10.3390/robotics9020032 - 8 May 2020
Cited by 26 | Viewed by 8681
Abstract
This paper presents a survey on mobile robots as systems that can move in different environments by walking, flying, and swimming, up to solutions that combine those capabilities. The peculiarities of these mobile robots are analyzed with significant examples as references, and a specific case study is presented from the authors’ direct experience with the robotic platform HeritageBot for applications within the frame of Cultural Heritage. The hybrid design of mobile robots is explained as the integration of different technologies to achieve robotic systems with full mobility. Full article
(This article belongs to the Special Issue Feature Papers 2020)
Show Figures

Figure 1
<p>Hybrid locomotion modes (T – terrestrial, A – aerial, W – aquatic).</p>
Figure 2
<p>Terrestrial robots with independent wheels and legs: (<b>a</b>) a hybrid robot with bounding gait [<a href="#B108-robotics-09-00032" class="html-bibr">108</a>]; (<b>b</b>) top view of PAW [<a href="#B110-robotics-09-00032" class="html-bibr">110</a>]. Reproduced with permission 4822431205001 from J.A. Smith, IEEE Proceedings; published by IEEE, 2006; and with permission 4822431064892 from J.A. Smith, IEEE Proceedings; published by IEEE, 2006.</p>
Figure 3
<p>Terrestrial robots with transforming wheels and legs: (<b>a</b>) Quattroped (legged configuration) [<a href="#B114-robotics-09-00032" class="html-bibr">114</a>]; (<b>b</b>) Quattroped (wheeled configuration); (<b>c</b>) Wheel-leg hybrid robot (legged configuration) [<a href="#B115-robotics-09-00032" class="html-bibr">115</a>]; (<b>d</b>) Wheel-leg hybrid robot (wheeled configuration) [<a href="#B115-robotics-09-00032" class="html-bibr">115</a>]. Reproduced with permission 4822430789880 from Shen-Chiang Chen, IEEE/ASME Transactions on Mechatronics; published by IEEE, 2014; and with permission 4822440156230 from Kenjiro Tadakuma, IEEE Proceedings; published by IEEE, 2010.</p>
Figure 4
<p>Terrestrial hybrid robots without wheels: (<b>a</b>) MorphEx, a transformational legged/rolling robot [<a href="#B118-robotics-09-00032" class="html-bibr">118</a>], (<b>b</b>) Rebis, a walking/snake-like robot [<a href="#B119-robotics-09-00032" class="html-bibr">119</a>]. Reproduced with permission 4822440256372 from Rohan Thakker, IEEE Proceedings; published by IEEE, 2014.</p>
Figure 5
<p>Amphibious hybrid robots: (<b>a</b>) Aqua [<a href="#B122-robotics-09-00032" class="html-bibr">122</a>], (<b>b</b>) Water surface microrobot [<a href="#B123-robotics-09-00032" class="html-bibr">123</a>]. Reproduced with permission 4822431455398 from Gregory Dudek, Computer Magazine; published by IEEE, 2007; and with permission under a Creative Commons Attribution 4.0 International License from Yufeng Chen, Neel Doshi, Benjamin Goldberg, Hongqiang Wang &amp; Robert J. Wood, Nature Communication; published by Nature, 2018.</p>
Figure 6
<p>Swimming/crawling amphibious snake robot [<a href="#B127-robotics-09-00032" class="html-bibr">127</a>]. Reproduced with permission 4822431347404 from A. Crespi, IEEE Proceedings; published by IEEE, 2005.</p>
Figure 7
<p>Flying/walking robots: “Flying monkey” [<a href="#B131-robotics-09-00032" class="html-bibr">131</a>]. Reproduced with permission 4822440040666 from Yash Mulgaonkar, IEEE Proceedings; published by IEEE, 2016.</p>
Figure 8
<p>Hybrid design of the HeritageBot platform.</p>
Figure 9
<p>Step-climbing operation of the HeritageBot Prototype.</p>
Figure 10
<p>Flight operation of the HeritageBot Prototype with legs for all-terrain landing.</p>
20 pages, 3124 KiB  
Article
Developing Emotion-Aware Human–Robot Dialogues for Domain-Specific and Goal-Oriented Tasks
by Jhih-Yuan Huang, Wei-Po Lee, Chen-Chia Chen and Bu-Wei Dong
Robotics 2020, 9(2), 31; https://doi.org/10.3390/robotics9020031 - 7 May 2020
Cited by 8 | Viewed by 5615
Abstract
Developing dialogue services for robots is being promoted nowadays to provide natural human–robot interactions that enhance user experiences. In this study, we adopted a service-oriented framework to develop emotion-aware dialogues for service robots. Considering the importance of the contexts and contents of dialogues in delivering robot services, our framework employed deep learning methods to develop emotion classifiers and two types of dialogue models for dialogue services. In the first type of dialogue service, the robot works as a consultant, able to provide domain-specific knowledge to users. We trained different neural models for mapping questions and answering sentences, tracking the human emotion during the human–robot dialogue, and using the emotion information to decide the responses. In the second type of dialogue service, the robot continuously asks the user questions related to a task with a specific goal, tracks the user’s intention through the interactions, and provides suggestions accordingly. A series of experiments and performance comparisons were conducted to evaluate the major components of the presented framework, and the results showed the promise of our approach. Full article
(This article belongs to the Special Issue Theory and Practice on Robotics and Mechatronics)
Show Figures

Figure 1
<p>Overview of the proposed framework for the human–robot dialogues.</p>
Figure 2
<p>The deep learning model used for the emotion recognition.</p>
Figure 3
<p>The deep learning model used for the domain-specific human–robot dialogues.</p>
Figure 4
<p>The framework used for the task-oriented dialogues.</p>
Figure 5
<p>The neural model used for the recommendation.</p>
Figure 6
<p>Results of the three machine learning methods; (<b>a</b>) without and (<b>b</b>) with the enhanced techniques of the semantic rules and data balance.</p>
Figure 7
<p>Performance comparison of the two methods for the original dataset: (<b>a</b>) LSTM-CNN model; (<b>b</b>) embedding model.</p>
Figure 8
<p>Performance comparison of the two methods for the translated dataset: (<b>a</b>) LSTM-CNN model; (<b>b</b>) embedding model.</p>
Figure 9
<p>Performance of learning the neural belief tracker: (<b>a</b>) accuracy; (<b>b</b>) loss.</p>
Figure 10
<p>Comparison of the different activation functions in: (<b>a</b>) training phase; (<b>b</b>) test phase.</p>
Figure 11
<p>Comparison of the different numbers of hidden layers in: (<b>a</b>) training phase; (<b>b</b>) test phase.</p>
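As a rough idea of what an LSTM-CNN emotion classifier of the kind compared in Figures 7 and 8 can look like, here is a hedged tf.keras sketch; the vocabulary size, layer widths and number of emotion classes are placeholders, not the architecture reported in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical sizes -- the paper does not publish these exact values.
VOCAB, N_EMOTIONS = 20000, 7

model = tf.keras.Sequential([
    layers.Embedding(VOCAB, 128),             # word embeddings
    layers.LSTM(64, return_sequences=True),   # sequential context
    layers.Conv1D(64, 3, activation="relu"),  # local n-gram features
    layers.GlobalMaxPooling1D(),
    layers.Dense(N_EMOTIONS, activation="softmax"),  # emotion classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```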
10 pages, 3083 KiB  
Article
Motion Signal Processing for a Remote Gas Metal Arc Welding Application
by Lucas Christoph Ebel, Patrick Zuther, Jochen Maass and Shahram Sheikhi
Robotics 2020, 9(2), 30; https://doi.org/10.3390/robotics9020030 - 1 May 2020
Cited by 1 | Viewed by 4144
Abstract
This article covers the signal processing for a human–robot remote controlled welding application. For this purpose, a test and evaluation system is under development. It allows a skilled worker to weld in real time without being exposed to the associated physical stress and hazards. The torch movement of the welder in typical welding tasks is recorded by a stereoscopic sensor system. Due to a mismatch between the speed of the acquisition and the query rate for data by the robot control system, a prediction has to be developed that generates a suitable tool trajectory, which has to be a C<sup>2</sup>-continuous function, from the acquired data. For this purpose, based on a frequency analysis, a Kalman filter in combination with a disturbance observer is applied, which reproduces the hand movement with sufficient accuracy and without lag. The required algorithm is put under test on a real-time operating system based on Linux and Preempt_RT connected to a KRC4 robot controller. With this setup, the welding results in a plane are of good quality and the robot movement coincides sufficiently with the manual movement. Full article
(This article belongs to the Section Industrial Robots and Automation)
Show Figures

Figure 1
<p>Movement tracking setup with visual feedback.</p>
Figure 2
<p>Experimental setup with the welding gun and seam observation camera.</p>
Figure 3
<p>Process chain for remote welding.</p>
Figure 4
<p>Robot movement, <b>left</b>: measured data, <b>right</b>: created seam with experimental setup.</p>
Figure 5
<p>Spectra of main axis motion components.</p>
Figure 6
<p>Raw measured data of movement tracking.</p>
Figure 7
<p>Prediction of the Kalman filter.</p>
Figure 8
<p>Double integrator disturbance observer rearranged as a filter.</p>
Figure 9
<p>Comparison of the presented approach to a second order IIR-filter.</p>
Figure 10
<p>Successful application of the system on a complex welding application.</p>
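The core of the motion signal processing is a Kalman filter that smooths the tracked torch position without lag. Below is a minimal 1-D constant-acceleration filter in NumPy, with the paper's disturbance observer omitted and the noise covariances as tuning guesses rather than the authors' values.

```python
import numpy as np

def kalman_smooth(z, dt, q=1.0, r=1e-4):
    """1-D constant-acceleration Kalman filter for one axis of a
    tracked torch motion.  z : noisy position samples, dt : sample
    time.  Returns filtered position, velocity and acceleration."""
    F = np.array([[1.0, dt, dt**2 / 2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0]])
    Q = q * np.eye(3)          # process noise (tuning parameter)
    R = np.array([[r]])        # measurement noise (tuning parameter)
    x, P = np.zeros(3), np.eye(3)
    out = []
    for zk in z:
        x, P = F @ x, F @ P @ F.T + Q                    # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
        x = x + K @ (np.array([zk]) - H @ x)             # correct
        P = (np.eye(3) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```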
24 pages, 2620 KiB  
Article
Adjustable and Adaptive Control for an Unstable Mobile Robot Using Imitation Learning with Trajectory Optimization
by Christian Dengler and Boris Lohmann
Robotics 2020, 9(2), 29; https://doi.org/10.3390/robotics9020029 - 25 Apr 2020
Cited by 2 | Viewed by 4671
Abstract
In this contribution, we develop a feedback controller in the form of a parametric function for a mobile inverted pendulum. The controller both stabilizes the system and drives it to target positions with target orientations. Designing the controller based only on a cost function is difficult for this task, which is why we choose to train the controller using imitation learning on optimized trajectories. In contrast to popular approaches like policy gradient methods, this approach allows us to shape the behavior of the system by including equality constraints. When transferring the parametric controller from simulation to the real mobile inverted pendulum, the control performance is degraded due to the reality gap. A robust control design can reduce the degradation; however, for the framework of imitation learning on optimized trajectories, methods that explicitly consider robustness do not yet exist to the knowledge of the authors. We tackle this research gap by presenting a method to design a robust controller in the form of a recurrent neural network, to improve the transferability of the trained controller to the real system. As a last step, we make the behavior of the parametric controller adjustable to allow for fine-tuning of the behavior of the real system. We design the controller for our system and show in the application that the recurrent neural network has increased performance compared to a static neural network without robustness considerations. Full article
Show Figures

Figure 1
<p>Visual comparison of DAGGER, DART and DOI. <math display="inline"><semantics> <msub> <mi mathvariant="bold">x</mi> <mn>0</mn> </msub> </semantics></math> indicates the initial state and <math display="inline"><semantics> <msub> <mi mathvariant="bold">x</mi> <mi>r</mi> </msub> </semantics></math> a reference state with low costs. The black line indicates a trajectory sampled using the parametric controller in the loop. Blue arrows indicate the training data created for each approach, possibly generated for a distribution of trajectories, indicated by a blurred area.</p>
Figure 2
<p>The mobile inverted pendulum (MIP).</p>
Figure 3
<p>Mean accumulated costs of oracle controllers <math display="inline"><semantics> <mrow> <mi mathvariant="bold">g</mi> <mo>(</mo> <mi mathvariant="bold">x</mi> <mo>,</mo> <mi mathvariant="bold">p</mi> <mo>)</mo> </mrow> </semantics></math> trained on different numbers of trajectories over <math display="inline"><semantics> <msub> <mi>N</mi> <mrow> <mi>e</mi> <mi>p</mi> <mi>o</mi> <mi>c</mi> <mi>h</mi> </mrow> </msub> </semantics></math> epochs. The number of trajectories used during training is given in the line label.</p>
Figure 4
<p>Mean accumulated costs of the recurrent controller <math display="inline"><semantics> <mrow> <mi mathvariant="bold">r</mi> <mo>(</mo> <mi mathvariant="bold">x</mi> <mo>,</mo> <mi mathvariant="bold">h</mi> <mo>)</mo> </mrow> </semantics></math> trained using DOI over the number of epochs <math display="inline"><semantics> <msub> <mi>N</mi> <mrow> <mi>e</mi> <mi>p</mi> <mi>o</mi> <mi>c</mi> <mi>h</mi> </mrow> </msub> </semantics></math>.</p>
Figure 5
<p>Measurement data for an application of a static neural network controller <math display="inline"><semantics> <mrow> <mi mathvariant="bold">g</mi> <mo>(</mo> <mi mathvariant="bold">x</mi> <mo>)</mo> </mrow> </semantics></math>. Units are in meters for the position coordinates <span class="html-italic">x</span> and <span class="html-italic">y</span> (top plot) and radian for <math display="inline"><semantics> <mi>γ</mi> </semantics></math> (bottom plot).</p>
Figure 6
<p>Measurement data for an application of a recurrent neural network controller <math display="inline"><semantics> <mrow> <mi mathvariant="bold">r</mi> <mo>(</mo> <mi mathvariant="bold">x</mi> <mo>,</mo> <mi mathvariant="bold">h</mi> <mo>)</mo> </mrow> </semantics></math>. Units are in meters for the position coordinates <span class="html-italic">x</span> and <span class="html-italic">y</span> (top plot) and radian for <math display="inline"><semantics> <mi>γ</mi> </semantics></math> (bottom plot).</p>
Figure 7
<p>Measurement data for an application of an adjustable recurrent neural network controller <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="bold">r</mi> <mi>λ</mi> </msub> <mrow> <mo>(</mo> <mi mathvariant="bold">x</mi> <mo>,</mo> <mi>λ</mi> <mo>,</mo> <mi mathvariant="bold">h</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>0.3</mn> </mrow> </semantics></math>. Units are in meters for the position coordinates <span class="html-italic">x</span> and <span class="html-italic">y</span> (top plot) and radian for <math display="inline"><semantics> <mi>γ</mi> </semantics></math> (bottom plot).</p>
Figure 8
<p>Image sequence showing a manoeuvre of the real MIP using the recurrent control structure <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="bold">r</mi> <mi>λ</mi> </msub> <mrow> <mo>(</mo> <mi mathvariant="bold">x</mi> <mo>,</mo> <mi>λ</mi> <mo>,</mo> <mi mathvariant="bold">h</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>0.3</mn> </mrow> </semantics></math>. The top image shows the real system and attached below is a visualization of the measurement data (gray MIP), also showing the target position as a green MIP.</p>
Figure 9
<p>Image sequence showing a manoeuvre of the MIP in simulation using the recurrent control structure <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="bold">r</mi> <mi>λ</mi> </msub> <mrow> <mo>(</mo> <mi mathvariant="bold">x</mi> <mo>,</mo> <mi>λ</mi> <mo>,</mo> <mi mathvariant="bold">h</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>0.3</mn> </mrow> </semantics></math>. The target position is shown as a green MIP.</p>
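Stripped of the neural-network machinery, imitation learning on optimized trajectories is supervised regression from states to expert actions. A deliberately simple ridge-regression sketch of that step (the paper trains static and recurrent networks instead, so this is only the skeleton of the idea):

```python
import numpy as np

def fit_policy(states, actions, reg=1e-3):
    """Behavior cloning baseline: fit a linear state-feedback policy
    u = K x to (state, action) pairs collected from optimized
    trajectories.  `reg` is a ridge term for numerical conditioning."""
    X, U = np.asarray(states), np.asarray(actions)
    K = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ U)
    return lambda x: np.asarray(x) @ K

# toy usage: imitate u = -2*x1 - 0.5*x2 from noisy demonstrations
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
U = X @ np.array([[-2.0], [-0.5]]) + rng.normal(0, 0.01, (500, 1))
policy = fit_policy(X, U)
print(policy([1.0, 0.0]))   # approximately [-2.0]
```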
22 pages, 13080 KiB  
Article
Wheeled Robot Dedicated to the Evaluation of the Technical Condition of Large-Dimension Engineering Structures
by Jarosław Domin, Marcin Górski, Ryszard Białecki, Jakub Zając, Krzysztof Grzyb, Paweł Kielan, Wojciech Adamczyk, Ziemowit Ostrowski, Paulina Wienchol, Kamil Lamkowski, Jakub Kamiński, Mateusz Doledutko and Radosław Rosiek
Robotics 2020, 9(2), 28; https://doi.org/10.3390/robotics9020028 - 20 Apr 2020
Cited by 3 | Viewed by 5073
Abstract
There are many reasons why engineering structures are at risk of losing their load capacity during long-term service, which may lead to hazardous states. In such cases, structures must be strengthened. The most popular strengthening technique is based on the use of composite materials—fiber-reinforced polymer (FRP) elements attached to the structure with special resins. FRP elements are applied externally, often in hard-to-reach places, which makes it difficult to diagnose the durability and quality of such a connection. In this study, a modern thermographic method is combined with a mobile platform (wheeled robot) equipped with a set of diagnostic sensors, which makes it possible to assess the degree of damage at the contact between the structure and the composite material. The development potential of such a solution for subsequent projects is also indicated. Full article
(This article belongs to the Special Issue Advances in Inspection Robotic Systems)
Show Figures

Figure 1
<p>Examples of structural elements reinforced with composites (<b>a</b>–<b>c</b>) inside the building, (<b>d</b>–<b>f</b>) strengthening bridges [<a href="#B3-robotics-09-00028" class="html-bibr">3</a>].</p>
Figure 2
<p>Pull-off method for diagnostics of reinforcements [<a href="#B3-robotics-09-00028" class="html-bibr">3</a>].</p>
Figure 3
<p>3D view of a model including the defect location and texture: (<b>a</b>) defect view, (<b>b</b>) close-up view of spalling, (<b>c</b>) close-up view of crack [<a href="#B4-robotics-09-00028" class="html-bibr">4</a>].</p>
Figure 4
<p>Examples of possible surface damage identification [<a href="#B5-robotics-09-00028" class="html-bibr">5</a>].</p>
Figure 5
<p>(<b>a</b>) Beam cross-section showing fiber optic cable locations, (<b>b</b>) strain plot for beam [<a href="#B6-robotics-09-00028" class="html-bibr">6</a>].</p>
Figure 6
<p>Location of potential degradation in fiber-reinforced polymer (FRP)-strengthened structural member [<a href="#B12-robotics-09-00028" class="html-bibr">12</a>].</p>
Figure 7
<p>(<b>a</b>) Schematic diagram of paired structured light (SL)-based robot module, (<b>b</b>) deformation measurement system using paired SL-based robots [<a href="#B16-robotics-09-00028" class="html-bibr">16</a>].</p>
Figure 8
<p>Suction unit of diagnostic robot system (ALP) [<a href="#B17-robotics-09-00028" class="html-bibr">17</a>].</p>
Figure 9
<p>Suction/travel mechanism [<a href="#B17-robotics-09-00028" class="html-bibr">17</a>].</p>
Figure 10
<p>(<b>a</b>) Robot equipment and (<b>b</b>) stages of point cloud processing [<a href="#B18-robotics-09-00028" class="html-bibr">18</a>].</p>
Figure 11
<p>Unmanned aerial vehicle (UAV) and the scope of the device’s applicability [<a href="#B19-robotics-09-00028" class="html-bibr">19</a>].</p>
Figure 12
<p>General scheme of the unmanned aerial vehicle [<a href="#B20-robotics-09-00028" class="html-bibr">20</a>].</p>
Figure 13
<p>Construction of the Polcevera Bridge in Genoa (the marked part has collapsed) [<a href="#B23-robotics-09-00028" class="html-bibr">23</a>].</p>
Figure 14
<p>Transportation of the pedestrian bridge and collapsed structure [<a href="#B24-robotics-09-00028" class="html-bibr">24</a>].</p>
Figure 15
<p>(<b>a</b>) Practical application of bonded FRPs for slab strengthening. (<b>b</b>) Debonding in the vicinity of the shear crack [<a href="#B25-robotics-09-00028" class="html-bibr">25</a>].</p>
Figure 16
<p>Laboratory test rig configuration [<a href="#B8-robotics-09-00028" class="html-bibr">8</a>].</p>
Figure 17
<p>The exemplary result of thermal diagnostics: (<b>a</b>) selected location for retrieving thermal resistance of the connection layer, (<b>b</b>) temperature profile recorded at selected locations [<a href="#B8-robotics-09-00028" class="html-bibr">8</a>].</p>
Figure 18
<p>The scheme presenting the idea of the operation of the mobile platform robot dedicated to the evaluation of the technical condition of large building structures using thermographic techniques.</p>
Figure 19
<p>The main frame of the mobile platform (CAD model): 1, 2, 3—first, second and last module (floor).</p>
Figure 20
<p>The platform suspension—CAD model.</p>
Figure 21
<p>The heating module (CAD model) and its location, where: 1—a halogen lamp, 2—a shutter, 3—a reflector.</p>
Figure 22
<p>Main block diagram of the control and measurement system.</p>
Figure 23
<p>Connection diagram of the slave subsystem with a set of external components of the robot platform.</p>
Figure 24
<p>The real view of the control software panel—system dedicated for inspection and diagnostics of structures using thermography techniques.</p>
Figure 25
<p>The concrete slab (without FRP-strengthening) used for tests: (<b>a</b>) real view, (<b>b</b>) the damages shown in an illustrative way.</p>
Figure 26
<p>Concrete slab (without FRP-strengthening, with additional damages) used for tests: (<b>a</b>) real view, (<b>b</b>) the damages shown in an illustrative way.</p>
Figure 27
<p>The concrete slab (with FRP-strengthening) used for tests.</p>
Figure 28
<p>The real views of the mobile platform during laboratory tests.</p>
Figure 29
<p>Thermographic image of the tested area number 1: (<b>a</b>) view during heating process, (<b>b</b>) view after heating process completion.</p>
Figure 30
<p>The thermographic image of the tested area number 2: (<b>a</b>) view during heating process, (<b>b</b>) view after heating process completion.</p>
Figure 31
<p>The thermographic image of the tested area number 3: (<b>a</b>) view during heating process, (<b>b</b>) view after heating process completion.</p>
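The thermographic diagnostics in this work compare temperature decay profiles of inspected spots against sound, well-bonded spots (cf. Figures 17 and 29-31). A toy NumPy sketch of such a contrast check, with invented cooling curves and an invented threshold:

```python
import numpy as np

def thermal_contrast(T_test, T_ref):
    """Absolute thermal contrast between the temperature decay profile
    of an inspected spot and a sound (well-bonded) reference spot.  A
    sustained positive contrast after the heating pulse hints at a
    debonded FRP-concrete interface (higher thermal resistance)."""
    return np.asarray(T_test) - np.asarray(T_ref)

# toy cooling curves after a halogen-lamp heating pulse
t = np.linspace(0, 30, 301)                 # time after heating [s]
T_ref  = 25 + 10 * np.exp(-t / 5.0)         # sound area cools fast [C]
T_test = 25 + 10 * np.exp(-t / 9.0)         # defect area retains heat [C]
defect_suspected = np.max(thermal_contrast(T_test, T_ref)) > 1.0
print(defect_suspected)
```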
13 pages, 4513 KiB  
Article
An Analysis of Joint Assembly Geometric Errors Affecting End-Effector for Six-Axis Robots
by Chana Raksiri, Krittiya Pa-im and Supasit Rodkwan
Robotics 2020, 9(2), 27; https://doi.org/10.3390/robotics9020027 - 17 Apr 2020
Cited by 5 | Viewed by 5595
Abstract
This paper presents an analysis of the geometric errors of joint assembly that affect the end-effector of a six-axis industrial robot. The errors comprise 30 parameters derived from the Geometric Dimensioning and Tolerancing (GD&T) design, which is not the usual way to describe them. Three types of manufacturing tolerances—perpendicularity, parallelism and position—were introduced and investigated. These errors were measured with a laser tracker, and the measurement data were processed with a circle-fitting analysis. A kinematic model and an error model based on a combination of translation methods were used. An experiment was carried out to calculate the tolerances of the geometric errors. Then, the end-effector positions actually measured by the laser tracker were compared with the exact (nominal) ones, and the discrepancy was compensated by offline programming. As a result, the position errors were reduced by 90%. Full article
(This article belongs to the Section Industrial Robots and Automation)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Schematic diagram of the KUKA robot: (<b>a</b>) KUKA KR5 schematic model; (<b>b</b>) KUKA KR5 nominal dimensions.</p>
Full article ">Figure 2
<p>Frame displacement due to errors.</p>
Full article ">Figure 3
<p>Tolerancing analysis: (<b>a</b>) tolerancing as defined in the detail drawing; (<b>b</b>) tolerancing direction.</p>
Full article ">Figure 4
<p>Experiment setup.</p>
Full article ">Figure 5
<p>The seven measurement directions.</p>
Full article ">Figure 6
<p>Data set of the points captured in each joint.</p>
Full article ">Figure 7
<p>The represented circles, <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mn>1</mn> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mn>2</mn> </msub> </mrow> </semantics></math>, of the nominal and measured circles.</p>
Full article ">Figure 8
<p>Nominal model error and real measurement comparison.</p>
Full article ">Figure 9
<p>Position error of major direction.</p>
Full article ">Figure 10
<p>Position error of 2D diagonal direction.</p>
Full article ">Figure 11
<p>Position error of 3D diagonal direction.</p>
Full article ">
16 pages, 1368 KiB  
Article
A Pressing Attachment Approach for a Wall-Climbing Robot Utilizing Passive Suction Cups
by Dingxin Ge, Yongchen Tang, Shugen Ma, Takahiro Matsuno and Chao Ren
Robotics 2020, 9(2), 26; https://doi.org/10.3390/robotics9020026 - 13 Apr 2020
Cited by 27 | Viewed by 8258
Abstract
This paper proposes a pressing method for wall-climbing robots to prevent them from falling. In order to realize the method, the properties of the utilized suction cup are studied experimentally. Then based on the results, a guide rail is designed to distribute the [...] Read more.
This paper proposes a pressing method for wall-climbing robots to prevent them from falling. In order to realize the method, the properties of the utilized suction cup are studied experimentally. Then based on the results, a guide rail is designed to distribute the attached suction cup force and implement the pressing method. A prototype of a wall-climbing robot that utilizes passive suction cups and one motor is used to demonstrate the proposed method. An experimental test-bed is designed to measure the force changes of the suction cup when the robot climbs upwards. The experimental results validate that the suction cup can completely attach to the surface by the proposed method, and demonstrate that the robot can climb upwards without falling. Full article
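To make the pitch-up condition concrete, here is a minimal static sketch: gravity acting at a center of mass offset from the wall creates a pitch-up moment about the lowest attached cup, which the remaining attached cups must resist up to their detachment force. All geometry and force values are invented for illustration and are not the prototype's parameters.

```python
# Illustrative parameters (not the prototype's): robot weight, CoM offset
# from the wall, attached cup positions along the climb axis, and the
# per-cup detachment force obtained for a given pressing force.
W = 5.0                              # robot weight [N]
d = 0.03                             # CoM offset from the wall [m]
cup_y = [0.00, 0.05, 0.10, 0.15]     # attached cups, lowest first [m]
F_detach = 8.0                       # detachment force per cup [N]

# Pitch-up moment about the lowest cup vs. the worst-case resisting moment
M_gravity = W * d
M_cups = sum(F_detach * y for y in cup_y[1:])   # lowest cup acts as pivot
print("stable" if M_cups >= M_gravity else "pitch-up risk")
```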
Show Figures

Figure 1

Figure 1
<p>Pitch-up falling problem of previous wall-climbing robots.</p>
Full article ">Figure 2
<p>The cutaway view of the CAD model of the proposed wall-climbing robot.</p>
Full article ">Figure 3
<p>Free body force analysis of a wall-climbing robot in the two cases. (<b>a</b>) and (<b>b</b>) show the total numbers of attached suction cups, which are <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>i</mi> <mo>−</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> </semantics></math>, respectively.</p>
Full article ">Figure 4
<p>Schematic of the proposed experimental test-bed.</p>
Full article ">Figure 5
<p>(<b>a</b>) Pulled suction cup. (<b>b</b>) Cross-section of the pulled suction cup.</p>
Full article ">Figure 6
<p>(<b>a</b>) Natural status of the suction cup. (<b>b</b>) Pressing status of the suction cup. (<b>c</b>) Pulling status of the suction cup.</p>
Full article ">Figure 7
<p>Relationship between pressing force and detachment force.</p>
Full article ">Figure 8
<p>Experimental results of the measured relationship between force and displacement at different speeds. These experimental results were obtained with a pressing force of 11 N.</p>
Full article ">Figure 9
<p>Experimental results of the calculated relationship between force and displacement of the suction cup. These experimental results were obtained with a pressing force of 11 N.</p>
Full article ">Figure 10
<p>The proposed guide rail. (<b>a</b>) The proposed guide rail function areas. (<b>b</b>) Prototype of the proposed guide rail.</p>
Full article ">Figure 11
<p>A suction cup preparing to attach. (<b>a</b>) With a pre-turning angle. (<b>b</b>) Moving posture from (<b>a</b>). (<b>c</b>) Without a pre-turning angle. (<b>d</b>) Moving posture from (<b>c</b>).</p>
Full article ">Figure 12
<p>Prototype of proposed wall-climbing robot.</p>
Full article ">Figure 13
<p>A force measurement system. (<b>a</b>) Prototype of the proposed experimental setup. (<b>b</b>) Schematic design of the proposed experimental setup.</p>
Full article ">Figure 14
<p>Experiment methodology.</p>
Full article ">Figure 15
<p>Force of the suction cup versus climbing displacement of the robot.</p>
Full article ">Figure 16
<p>Proposed robot climbing on different material surfaces. (<b>a</b>) Robot climbing on transparent acrylic. (<b>b</b>) Robot climbing on an iron gate. (<b>c</b>) Robot climbing on a white board. (<b>d</b>) Robot climbing on window glass.</p>
Full article ">
19 pages, 665 KiB  
Article
An ACT-R Based Humanoid Social Robot to Manage Storytelling Activities
by Adriana Bono, Agnese Augello, Giovanni Pilato, Filippo Vella and Salvatore Gaglio
Robotics 2020, 9(2), 25; https://doi.org/10.3390/robotics9020025 - 12 Apr 2020
Cited by 12 | Viewed by 6370
Abstract
This paper describes an interactive storytelling system, accessible through the SoftBank robotic platforms NAO and Pepper. The main contribution consists of the interpretation of the story characters by humanoid robots, obtained through the definition of appropriate cognitive models, relying on the ACT-R cognitive [...] Read more.
This paper describes an interactive storytelling system, accessible through the SoftBank robotic platforms NAO and Pepper. The main contribution consists of the interpretation of the story characters by humanoid robots, obtained through the definition of appropriate cognitive models, relying on the ACT-R cognitive architecture. The reasoning processes leading to the story evolution are based on the represented knowledge and the suggestions of the listener at critical points of the story. They are disclosed during the narration to make clear the dynamics of the story and the feelings of the characters. We analyzed the impact of such externalization of the internal status of the characters to set the basis for future experimentation with primary school children. Full article
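For readers unfamiliar with ACT-R, the following toy loop sketches the match-select-fire production cycle that such a cognitive model runs at each step of the narration; the rules and the story state are invented placeholders, not the authors' cognitive models.

```python
# A toy production-system cycle in the spirit of ACT-R: on every cognitive
# cycle, the first rule whose condition matches the current state fires and
# updates the state. Rule names and the story state are hypothetical.
state = {"phase": "exposition", "listener_choice": "cross the river"}

def rule_ask_listener(s):
    # fires at the story's critical point: ask the listener for a suggestion
    if s["phase"] == "exposition":
        print("Robot: what should the hero do?")
        return {**s, "phase": "await_choice"}

def rule_advance_story(s):
    # fires once a suggestion is available: move the plot forward
    if s["phase"] == "await_choice" and s["listener_choice"]:
        print(f"Robot: the hero decides to {s['listener_choice']}...")
        return {**s, "phase": "climax"}

rules = [rule_ask_listener, rule_advance_story]
for _ in range(5):            # cognitive cycles: match, select, fire
    for rule in rules:
        new_state = rule(state)
        if new_state is not None:
            state = new_state
            break
    else:                     # no rule matched: the model halts
        break
print("final phase:", state["phase"])
```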
Show Figures

Figure 1

Figure 1
<p>A representation of the ACT-R architecture.</p>
Full article ">Figure 2
<p>Freytag’s Pyramid.</p>
Full article ">Figure 3
<p>Cognitive model of M1: Status A.</p>
Full article ">Figure 4
<p>Cognitive model of M1: Status B.</p>
Full article ">Figure 5
<p>Selection Window.</p>
Full article ">Figure 6
<p>Human-Robot interaction.</p>
Full article ">Figure 7
<p>Choices selection through the Pepper’s tablet.</p>
Full article ">Figure 8
<p>Distribution of credibility for the two kinds of interaction with the explanation of the internal reasoning of the characters (<b>left</b>) and without it (<b>right</b>).</p>
Full article ">Figure 9
<p>Distribution of performance for the two kinds of interaction with the explanation of the internal reasoning of the characters (<b>left</b>) and without it (<b>right</b>).</p>
Full article ">
21 pages, 12107 KiB  
Article
Experimental Testing of Bandstop Wave Filter to Mitigate Wave Reflections in Bilateral Teleoperation
by Isaac O. Ogunrinde, Collins F. Adetu, Carl A. Moore, Jr., Rodney G. Roberts and Keimargeo McQueen
Robotics 2020, 9(2), 24; https://doi.org/10.3390/robotics9020024 - 11 Apr 2020
Cited by 4 | Viewed by 4203
Abstract
A bilateral teleoperation system can become unstable in the presence of a modest time delay. However, the wave variable algorithm provides stable operation for any fixed time delay using passivity arguments. Unfortunately, the wave variable method produces wave reflection that can degrade teleoperation [...] Read more.
A bilateral teleoperation system can become unstable in the presence of a modest time delay. However, the wave variable algorithm provides stable operation for any fixed time delay using passivity arguments. Unfortunately, the wave variable method produces wave reflection that can degrade teleoperation performance when a mismatched impedance exists between the master and slave robot. In this work, we develop a novel bandstop wave filter and experimentally verify that the technique can mitigate the effects of wave reflections in bilaterally teleoperated systems. We apply the bandstop wave filter in the wave domain and filter the wave signal along the communication channel. We placed the bandstop wave filter in the master-to-slave robot path to alleviate lower frequency components of the reflected signal. With the lower frequency components reduced, wave reflections that degrade teleoperation performance were mitigated, and we obtained a better transient response from the system. Results from our experiment show that the bandstop wave filter performed 67% better than the shaping wave filter. Full article
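A small sketch of the underlying idea, under assumed parameters: velocity and force are encoded into the forward wave variable u = (b·ẋ + F)/√(2b), and a Butterworth bandstop filter is applied to that wave signal on the master-to-slave path. The wave impedance, stop band, and signal content below are illustrative assumptions, not the paper's experimental values.

```python
# Minimal sketch: wave-variable encoding plus a bandstop filter on the
# forward wave. The 1.5-2.5 Hz stop band stands in for an assumed
# reflection band; it is not taken from the paper.
import numpy as np
from scipy.signal import butter, lfilter

b_imp = 1.0            # wave impedance (assumed)
fs = 1000.0            # sample rate [Hz] (assumed)

def to_wave(xdot, force):
    """Forward wave variable u = (b*xdot + F) / sqrt(2b)."""
    return (b_imp * xdot + force) / np.sqrt(2.0 * b_imp)

# Forward wave carrying an assumed 2 Hz reflection component
t = np.arange(0.0, 2.0, 1.0 / fs)
u = to_wave(np.sin(2 * np.pi * 0.5 * t), 0.3 * np.sin(2 * np.pi * 2.0 * t))

# 4th-order Butterworth bandstop over the assumed reflection band
bb, aa = butter(4, [1.5, 2.5], btype="bandstop", fs=fs)
u_filtered = lfilter(bb, aa, u)
```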
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>A communication channel with wave variables across it.</p>
Full article ">Figure 2
<p>(<b>a</b>) Plot showing the response of the master and slave robots with a 100 ms delay before using the wave variable; (<b>b</b>) wave reflections with the wave variable implemented at a time delay of 100 ms.</p>
Full article ">Figure 3
<p>Paths traveled by wave signal when the wave variable is employed in a bilaterally teleoperated system.</p>
Full article ">Figure 4
<p>Comparing FFT of the feedback force for masses 0.1 kg and 0.5 kg with 100 ms time delay (Simulations) (<b>a</b>) wave variable implemented, and (<b>b</b>) shaping wave filter implemented.</p>
Full article ">Figure 5
<p>Bandstop wave filter and traditional wave variable method implemented in a bilateral teleoperation system.</p>
Full article ">Figure 6
<p>Comparing FFT of the feedback force for masses 0.1 kg and 0.5 kg with 100 ms time delay (simulation) (<b>a</b>) the shaping wave filter (<b>b</b>) bandstop wave filter.</p>
Full article ">Figure 7
<p>Comparing FFT of the feedback force for masses 0.1 kg and 0.5 kg with 500 ms time delay (simulation). (<b>a</b>) the wave variable (<b>b</b>) the shaping wave filter (<b>c</b>) bandstop wave filter.</p>
Full article ">Figure 7 Cont.
<p>Comparing FFT of the feedback force for masses 0.1 kg and 0.5 kg with 500 ms time delay (simulation). (<b>a</b>) the wave variable (<b>b</b>) the shaping wave filter (<b>c</b>) bandstop wave filter.</p>
Full article ">Figure 8
<p>Comparing FFT of the feedback force for masses 0.1 kg and 0.5 kg with 5 s time delay (simulation). (<b>a</b>) the wave variable (<b>b</b>) the shaping wave filter (<b>c</b>) bandstop wave filter.</p>
Full article ">Figure 8 Cont.
<p>Comparing FFT of the feedback force for masses 0.1 kg and 0.5 kg with 5 s time delay (simulation). (<b>a</b>) the wave variable (<b>b</b>) the shaping wave filter (<b>c</b>) bandstop wave filter.</p>
Full article ">Figure 9
<p>Experiment setup showing the Force Dimension Sigma.7 (<b>Left</b>) and Omega.3 (<b>Right</b>) robots as slave and master devices, respectively.</p>
Full article ">Figure 10
<p>Slave device response to a step reference input from the master device.</p>
Full article ">Figure 11
<p>Experimental results for the real master and real slave robot with a mass of 0.1 kg at a time delay of 100 ms, with the position, velocity and force plotted: (<b>a</b>) the traditional wave variable; (<b>b</b>) the shaping wave filter; (<b>c</b>) the bandstop filter.</p>
Full article ">Figure 12
<p>Experimental results for the real master and real slave robot with a mass of 0.1 kg at a time delay of 500 ms, with the position, velocity and force plotted: (<b>a</b>) the traditional wave variable; (<b>b</b>) the shaping wave filter; (<b>c</b>) the bandstop filter.</p>
Full article ">Figure 13
<p>Experimental results for the real master and real slave robot with a mass of 0.5 kg at a time delay of 100 ms, with the position, velocity and force plotted: (<b>a</b>) the traditional wave variable; (<b>b</b>) the shaping wave filter; (<b>c</b>) the bandstop wave filter.</p>
Full article ">Figure 14
<p>Experimental results for the real master and real slave robot with a mass of 0.5 kg at a time delay of 500 ms, with the position, velocity and force plotted: (<b>a</b>) the traditional wave variable; (<b>b</b>) the shaping wave filter; (<b>c</b>) the bandstop wave filter.</p>
Full article ">Figure 15
<p>Comparing the repeatability of experimental results from the bandstop wave filter with the shaping wave filter for a slave mass of 0.1 kg at different time delays.</p>
Full article ">
12 pages, 3922 KiB  
Article
Static Balancing of Wheeled-legged Hexapod Robots
by Ernesto Christian Orozco-Magdaleno, Daniele Cafolla, Eduardo Castillo-Castaneda and Giuseppe Carbone
Robotics 2020, 9(2), 23; https://doi.org/10.3390/robotics9020023 - 7 Apr 2020
Cited by 21 | Viewed by 7202
Abstract
Locomotion over different terrain types, whether flat or uneven, is very important for a wide range of service operations in robotics. Potential applications range from surveillance and rescue to hospital assistance. Wheeled-legged hexapod robots have been designed to solve these locomotion tasks. Given the [...] Read more.
Locomotion over different terrain types, whether flat or uneven, is very important for a wide range of service operations in robotics. Potential applications range from surveillance and rescue to hospital assistance. Wheeled-legged hexapod robots have been designed to solve these locomotion tasks. Given the wide range of feasible operations, one of the key operation planning issues is related to the robot balancing during motion tasks. Usually, this problem is related to the pose of the robot’s center of mass, which can be addressed using different mathematical techniques. This paper proposes a new practical technique for balancing wheeled-legged hexapod robots, where a Biodex Balance System model SD (for static & dynamic) is used to obtain the effective position of the center of mass, so that it can be relocated to its optimal position. Experimental tests are carried out to evaluate the effectiveness of this technique and to modify and improve the position of the hexapod robots’ center of mass. Full article
(This article belongs to the Special Issue Advances in Robotics and Mechatronics)
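The balancing idea can be sketched numerically: per-foot normal forces give the ground projection of the center of mass, which is then checked against the support polygon (cf. the stability margin of Figure 4). The foot positions and force readings below are invented numbers for illustration, not measurements from the robot or the Biodex device.

```python
# Minimal sketch: CoM ground projection from a force distribution, plus a
# convex support-polygon containment test as a static stability check.
import numpy as np

# Attached foot positions (counter-clockwise) and measured normal forces
feet = np.array([[ 0.25,  0.00], [ 0.20,  0.15], [-0.20,  0.15],
                 [-0.25,  0.00], [-0.20, -0.15], [ 0.20, -0.15]])  # [m]
forces = np.array([10.0, 9.0, 9.0, 10.5, 9.0, 9.5])                # [N]

# Force-weighted average gives the CoM projection on the ground plane
com = (feet * forces[:, None]).sum(axis=0) / forces.sum()

def inside_convex(p, poly):
    """True if p lies inside a counter-clockwise convex polygon."""
    v = np.roll(poly, -1, axis=0) - poly          # edge vectors
    w = p - poly                                  # vertex-to-point vectors
    return bool(np.all(v[:, 0] * w[:, 1] - v[:, 1] * w[:, 0] >= 0.0))

print("CoM projection:", com, "statically stable:", inside_convex(com, feet))
```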
Show Figures

Figure 1

Figure 1
<p>Kinematic diagram of the wheeled-legged hexapod robot: (<b>a</b>) full robot; (<b>b</b>) detail of one leg.</p>
Full article ">Figure 2
<p>Wheeled-legged robot: (<b>a</b>) photograph of the robot; (<b>b</b>) one hybrid leg.</p>
Full article ">Figure 3
<p>Schematic diagram of the control hardware of the wheeled-legged hexapod robot.</p>
Full article ">Figure 4
<p>Stability margin and support pattern in the legged locomotion of a hexapod robot.</p>
Full article ">Figure 5
<p>The Biodex Balance System model SD (for static &amp; dynamic): (<b>a</b>) primary components and adjustment mechanisms; (<b>b</b>) postural stability testing screen [<a href="#B28-robotics-09-00023" class="html-bibr">28</a>].</p>
Full article ">Figure 6
<p>The wheeled-legged hexapod robot located at the center of the foot platform.</p>
Full article ">Figure 7
<p>Results of experimental tests: (<b>a</b>) test 1; (<b>b</b>) test 2.</p>
Full article ">Figure 8
<p>Schematic diagram of the unbalancing of the robot.</p>
Full article ">Figure 9
<p>Schematic diagram of the static model of the wheeled-legged hexapod robot.</p>
Full article ">Figure 10
<p>Free-body diagrams: (<b>a</b>) schematics of full hexapod robot; (<b>b</b>) a detail of a single leg in fully stretched configuration.</p>
Full article ">Figure 11
<p>Compensation of the center of mass: (<b>a</b>) proposed, (<b>b</b>) real.</p>
Full article ">Figure 12
<p>Experimental tests with the wheeled-legged hexapod robot: (<b>a</b>) layout of the experimental test; (<b>b</b>) placement of sensors on the robot.</p>
Full article ">Figure 13
<p>Photo sequence of the tested path with unbalancing: (<b>a1</b>) t = 0 s; (<b>a2</b>) t = 3 s; (<b>a3</b>) t = 6 s; (<b>a4</b>) t = 9 s; (<b>a5</b>) t = 12 s.</p>
Full article ">Figure 14
<p>Photo sequence of the tested path with balancing: (<b>a1</b>) t = 0 s; (<b>a2</b>) t = 3 s; (<b>a3</b>) t = 6 s; (<b>a4</b>) t = 9 s; (<b>a5</b>) t = 12 s.</p>
Full article ">Figure 15
<p>Experimental tests results, comparison between target and real displacement: (<b>a</b>) unbalanced; (<b>b</b>) balanced.</p>
Full article ">
20 pages, 1939 KiB  
Article
Evaluation of Hunting-Based Optimizers for a Quadrotor Sliding Mode Flight Controller
by Josenalde Oliveira, Paulo Moura Oliveira, José Boaventura-Cunha and Tatiana Pinho
Robotics 2020, 9(2), 22; https://doi.org/10.3390/robotics9020022 - 7 Apr 2020
Cited by 4 | Viewed by 4206
Abstract
The design of Multi-Input Multi-Output nonlinear control systems for a quadrotor can be a difficult task. Nature inspired optimization techniques can greatly improve the design of non-linear control systems. Two recently proposed hunting-based swarm intelligence inspired techniques are the Grey Wolf Optimizer (GWO) [...] Read more.
The design of Multi-Input Multi-Output nonlinear control systems for a quadrotor can be a difficult task. Nature inspired optimization techniques can greatly improve the design of non-linear control systems. Two recently proposed hunting-based swarm intelligence inspired techniques are the Grey Wolf Optimizer (GWO) and the Ant Lion Optimizer (ALO). This paper proposes the use of both GWO and ALO techniques to design a Sliding Mode Control (SMC) flight system for tracking improvement of altitude and attitude in a quadrotor dynamic model. SMC is a nonlinear technique which requires that its strictly coupled parameters related to continuous and discontinuous components be correctly adjusted for proper operation. This requires minimizing the tracking error while keeping the chattering effect and control signal magnitude within suitable limits. The performance achieved with both GWO and ALO in realistic disturbed flight scenarios is presented and compared to the classical Particle Swarm Optimization (PSO) algorithm. Simulated results are presented showing that GWO and ALO outperformed PSO in terms of precise tracking, for both ideal and disturbed conditions. It is shown that the higher stochastic nature of these hunting-based algorithms provided more confidence in local optima avoidance, suggesting the feasibility of more precise tracking for practical use. Full article
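As a hedged sketch of how a hunting-based optimizer tunes SMC gains, the compact Grey Wolf Optimizer below minimizes a stand-in cost over two gains (lambda, K). In the paper the cost would be a tracking-error criterion evaluated on a simulated flight; the toy quadratic here only mimics that, and all bounds and constants are our own assumptions.

```python
# Compact GWO sketch: alpha, beta and delta (the three best wolves) pull
# the pack toward promising regions while the coefficient `a` decays from
# 2 to 0, shifting from exploration to exploitation.
import numpy as np

rng = np.random.default_rng(1)

def cost(p):
    lam, K = p
    # hypothetical surrogate: tracking-error term plus a chattering penalty
    return (lam - 2.0) ** 2 + 0.5 * (K - 5.0) ** 2 + 0.01 * K ** 2

def gwo(n_wolves=20, n_iter=100, lo=0.0, hi=10.0, dim=2):
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for it in range(n_iter):
        order = np.argsort([cost(x) for x in X])
        alpha, beta, delta = X[order[:3]]          # three best wolves
        a = 2.0 * (1.0 - it / n_iter)              # exploration -> exploitation
        for i in range(n_wolves):
            Xi = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = 2.0 * a * rng.random(dim) - a
                C = 2.0 * rng.random(dim)
                Xi += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(Xi / 3.0, lo, hi)
    return min(X, key=cost)

print(gwo())  # converges near lambda = 2, K = 4.9 for this surrogate
```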
Show Figures

Figure 1

Figure 1
<p>Quadrotor representation.</p>
Full article ">Figure 2
<p>Example of the effect of bad tuning (chattering) on input signals.</p>
Full article ">Figure 3
<p>General trajectory used for optimization.</p>
Full article ">Figure 4
<p>Convergence curves: (<b>a</b>) best fitness value; (<b>b</b>) average mean fitness value.</p>
Full article ">Figure 5
<p>Example of SMC-based equivalent control method to reduce chattering.</p>
Full article ">Figure 6
<p>Flight plan 1 (zoom)—non-disturbed case.</p>
Full article ">Figure 7
<p>Flight plan 1—non-disturbed case.</p>
Full article ">Figure 8
<p>Flight plan 2—zoom of the effect of the parameter variation inserted at <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>=</mo> <mn>80</mn> <mi>s</mi> </mrow> </semantics></math>.</p>
Full article ">Figure 9
<p>Flight plan 2—parametric variation.</p>
Full article ">Figure 10
<p>Flight plan 3—input disturbance in <math display="inline"><semantics> <msub> <mi>U</mi> <mn>1</mn> </msub> </semantics></math> and motor failure in <math display="inline"><semantics> <msub> <mi>U</mi> <mn>2</mn> </msub> </semantics></math>.</p>
Full article ">Figure 11
<p>Flight plan 3—input disturbance and motor failure.</p>
Full article ">Figure 12
<p>Flight plan 4 (zoom)—effect of measurement noise on the height control.</p>
Full article ">Figure 13
<p>Flight plan 4—measurement noise.</p>
Full article ">
28 pages, 1948 KiB  
Review
Augmented Reality for Robotics: A Review
by Zhanat Makhataeva and Huseyin Atakan Varol
Robotics 2020, 9(2), 21; https://doi.org/10.3390/robotics9020021 - 2 Apr 2020
Cited by 181 | Viewed by 34791
Abstract
Augmented reality (AR) is used to enhance the perception of the real world by integrating virtual objects into an image sequence acquired from various camera technologies. Numerous AR applications in robotics have been developed in recent years. The aim of this paper is [...] Read more.
Augmented reality (AR) is used to enhance the perception of the real world by integrating virtual objects into an image sequence acquired from various camera technologies. Numerous AR applications in robotics have been developed in recent years. The aim of this paper is to provide an overview of AR research in robotics during the five-year period from 2015 to 2019. We classified these works in terms of application areas into four categories: (1) Medical robotics: Robot-Assisted surgery (RAS), prosthetics, rehabilitation, and training systems; (2) Motion planning and control: trajectory generation, robot programming, simulation, and manipulation; (3) Human-robot interaction (HRI): teleoperation, collaborative interfaces, wearable robots, haptic interfaces, brain-computer interfaces (BCIs), and gaming; (4) Multi-agent systems: use of visual feedback to remotely control drones, robot swarms, and robots with shared workspace. Recent developments in AR technology are discussed, followed by the challenges met in AR due to issues of camera localization, environment mapping, and registration. We explore AR applications in terms of how AR was integrated and which improvements it introduced to corresponding fields of robotics. In addition, we summarize the major limitations of the presented applications in each category. Finally, we conclude our review with future directions of AR research in robotics. The survey covers over 100 research works published over the last five years. Full article
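The registration challenge discussed in the review reduces, at its core, to projecting virtual 3D content into the camera image once the camera pose is known. The sketch below shows that pinhole projection step; the intrinsics and pose values are illustrative assumptions, not values from any surveyed system.

```python
# Minimal pinhole-projection sketch of AR registration: a virtual world
# point is mapped into pixel coordinates given intrinsics K and pose [R|t].
import numpy as np

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])     # intrinsics (fx, fy, cx, cy), assumed
R = np.eye(3)                             # camera rotation (world -> camera)
t = np.array([0.0, 0.0, 2.0])             # camera translation [m], assumed

def project(p_world):
    p_cam = R @ p_world + t               # express the point in the camera frame
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]               # perspective division to pixels

print(project(np.array([0.1, -0.05, 0.0])))  # pixel position of the overlay
```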
Show Figures

Figure 1

Figure 1
<p>Milgram’s reality–virtuality continuum (adapted from [<a href="#B5-robotics-09-00021" class="html-bibr">5</a>,<a href="#B23-robotics-09-00021" class="html-bibr">23</a>]).</p>
Full article ">Figure 2
<p>Historical trends of the Mixed Reality (MR), Augmented Reality (AR), and Virtual Reality (VR) keywords in the papers indexed by the Scopus database.</p>
Full article ">Figure 3
<p>Illustrations of the three classes of AR technology.</p>
Full article ">Figure 4
<p>AR systems in RAS: (<b>a</b>) Visualization of transparent body phantom in ARssist [<a href="#B55-robotics-09-00021" class="html-bibr">55</a>], (<b>b</b>,<b>c</b>) Examples of AR-based visualisation of endoscopy in ARssist [<a href="#B59-robotics-09-00021" class="html-bibr">59</a>].</p>
Full article ">Figure 5
<p>Historical trends of the MR, AR, and VR keywords in the papers indexed by the Scopus database within the field of medical robotics.</p>
Full article ">Figure 6
<p>AR in teleoperation and robot motion planning: (<b>a</b>) AR-based teleoperation of maintenance robot [<a href="#B38-robotics-09-00021" class="html-bibr">38</a>], (<b>b</b>) AR-based visual feedback on the computer screen [<a href="#B69-robotics-09-00021" class="html-bibr">69</a>], (<b>c</b>) virtual planning in AR with a 3D CAD model of the robot and teach pendant [<a href="#B72-robotics-09-00021" class="html-bibr">72</a>].</p>
Full article ">Figure 7
<p>Historical trends of the MR, AR, and VR keywords in the papers indexed by the Scopus database within the field of robot control and planning.</p>
Full article ">Figure 8
<p>AR in human–robot collaboration: (<b>a</b>) AR hardware setup for remote maintenance [<a href="#B99-robotics-09-00021" class="html-bibr">99</a>], (<b>b</b>) RoMA set up for 3D printing [<a href="#B40-robotics-09-00021" class="html-bibr">40</a>], (<b>c</b>) HRI setup for the visualization of safe and danger zones around a robot, (<b>d</b>) safety aura visualization around the robot [<a href="#B100-robotics-09-00021" class="html-bibr">100</a>].</p>
Full article ">Figure 9
<p>Historical trends of the MR, AR, and VR keywords in the papers indexed by the Scopus database within the field of human–robot interaction.</p>
Full article ">Figure 10
<p>AR for robot swarms: (<b>a</b>) Simulated environment with virtual sensing technology in ARGoS (<b>left</b>), aerial view of real environment (<b>right</b>) [<a href="#B115-robotics-09-00021" class="html-bibr">115</a>]. Multi-projector system of MAR-CPS: (<b>b</b>) interaction between ground vehicles and drone and (<b>c</b>) detection of the vehicle by drone [<a href="#B116-robotics-09-00021" class="html-bibr">116</a>], (<b>d</b>) Experiment with 50 Kilobots in simulation [<a href="#B117-robotics-09-00021" class="html-bibr">117</a>].</p>
Full article ">Figure 11
<p>Historical trends of the MR, AR, and VR keywords in the papers indexed by the Scopus database within the field of swarm robotics.</p>
Full article ">
22 pages, 7713 KiB  
Article
Classification of All Non-Isomorphic Regular and Cuspidal Arm Anatomies in an Orthogonal Metamorphic Manipulator
by Christos Koukos-Papagiannis, Vassilis Moulianitis and Nikos Aspragathos
Robotics 2020, 9(2), 20; https://doi.org/10.3390/robotics9020020 - 2 Apr 2020
Cited by 2 | Viewed by 5506
Abstract
This paper proposes a classification of all non-isomorphic anatomies of an orthogonal metamorphic manipulator according to the topology of the workspace, considering cusps and nodes. Using symbolic algebra, a general kinematics polynomial equation is formulated, and the closed-form parametric solution of the inverse kinematics [...] Read more.
This paper proposes a classification of all non-isomorphic anatomies of an orthogonal metamorphic manipulator according to the topology of the workspace, considering cusps and nodes. Using symbolic algebra, a general kinematics polynomial equation is formulated, and the closed-form parametric solution of the inverse kinematics is obtained for the resulting anatomies. The metamorphic design space was partitioned into eight distinct subspaces, each with the same number of cusps and nodes, by plotting the bifurcating and strict surfaces in a Cartesian coordinate system {θπ1, θπ2, d4}. In addition, several non-singular, smooth and continuous trajectories are simulated to show the importance of this classification. Full article
(This article belongs to the Special Issue Advances in Robotics and Mechatronics)
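The classification rests on counting inverse kinematic solutions, i.e., real roots of the kinematics polynomial, a count that changes when the end-effector crosses a cusp locus. The sketch below illustrates that root-counting step on invented quartic coefficients; the paper's actual symbolic polynomial is not reproduced here.

```python
# Minimal sketch: count real roots of a quartic IK polynomial with
# numpy.roots. The two coefficient sets are hypothetical workspace points
# on either side of a cusp-induced change in solution count.
import numpy as np

def n_ik_solutions(coeffs, tol=1e-9):
    """Number of (approximately) real roots, i.e., IK solutions."""
    roots = np.roots(coeffs)
    return int(np.sum(np.abs(roots.imag) < tol))

print(n_ik_solutions([1.0, 0.0, -5.0, 0.0, -0.0 + 4.0]))  # t^4 - 5t^2 + 4: 4 real roots
print(n_ik_solutions([1.0, 0.0,  3.0, 0.0, -4.0]))        # t^4 + 3t^2 - 4: 2 real roots
```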
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>) Active rotational module. (<b>b</b>) Versatile passive joint (pseudo-joint) connector, constructed from aluminium, with 13 discrete angular positions.</p>
Full article ">Figure 2
<p>(<b>a</b>) Orthogonal metamorphic mechanism with local coordinates systems, (<b>b</b>) Side view of passive joint with 13 possible discrete angular positions in <math display="inline"><semantics> <mrow> <mrow> <mo>[</mo> <mrow> <mrow> <mo>−</mo> <mn>90</mn> <mo>°</mo> <mo>,</mo> <mn>90</mn> <mo>°</mo> </mrow> </mrow> <mo>]</mo> </mrow> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Surfaces in 3D design metamorphic parameter space and 4 distinct subspaces with the same number of cusp points.</p>
Full article ">Figure 4
<p>Selective arm anatomies of the metamorphic structure and singularities are displayed in joint space and half cross section of workspace with the metamorphic parameters: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>90</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>90</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="normal">d</mi> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.23</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>15</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>90</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="normal">d</mi> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.6</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>60</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>30</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="normal">d</mi> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.6</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> <mo>.</mo> </mrow> </semantics></math></p>
Full article ">Figure 4 Cont.
<p>Selective arm anatomies of the metamorphic structure and singularities are displayed in joint space and half cross section of workspace with the metamorphic parameters: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>90</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>90</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="normal">d</mi> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.23</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>15</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>90</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="normal">d</mi> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.6</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>60</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>30</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="normal">d</mi> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.6</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> <mo>.</mo> </mrow> </semantics></math></p>
Full article ">Figure 5
<p>Separating surfaces in 3D design metamorphic parameter space and 8 distinct subspaces with the same number of cusps and nodes. In every subspace, the first and second numbers in parentheses indicate the numbers of cusps and nodes, respectively.</p>
Full article ">Figure 6
<p>Selective arm anatomies of the metamorphic structure and singularities are displayed in joint space and half cross-section of workspace with the metamorphic parameters: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mo>±</mo> <mn>75</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="sans-serif">θ</mi> </mrow> </mrow> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mo>±</mo> <mn>90</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="normal">d</mi> </mrow> </mrow> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.07</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>45</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>60</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="normal">d</mi> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.1</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>15</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>30</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="normal">d</mi> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.11</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> <mo>.</mo> </mrow> </semantics></math></p>
Full article ">Figure 6 Cont.
<p>Selective arm anatomies of the metamorphic structure and singularities are displayed in joint space and half cross-section of workspace with the metamorphic parameters: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mo>±</mo> <mn>75</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="sans-serif">θ</mi> </mrow> </mrow> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mo>±</mo> <mn>90</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="normal">d</mi> </mrow> </mrow> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.07</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>45</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>60</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="normal">d</mi> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.1</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>15</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>30</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="normal">d</mi> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.11</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> <mo>.</mo> </mrow> </semantics></math></p>
Full article ">Figure 7
<p>Selective arm anatomies of the metamorphic structure and singularities are displayed in joint space and half cross-section of workspace with design parameters: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>30</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>90</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="normal">d</mi> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.4</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>75</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>15</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="normal">d</mi> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.4</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mn>0</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="sans-serif">θ</mi> </mrow> </mrow> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mn>0</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="normal">d</mi> </mrow> </mrow> <mrow> <mrow> <mn>4</mn> </mrow> </mrow> </msub> <mrow> <mo>=</mo> <mn>0.2</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mo>±</mo> <mn>90</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="sans-serif">θ</mi> </mrow> </mrow> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mn>0</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="normal">d</mi> </mrow> </mrow> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.2</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>e</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mn>0</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="sans-serif">θ</mi> </mrow> </mrow> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mo>±</mo> <mn>30</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="normal">d</mi> </mrow> </mrow> <mrow> <mrow> <mn>4</mn> </mrow> </mrow> </msub> <mrow> <mo>=</mo> <mn>0.2</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> <mo>.</mo> </mrow> </semantics></math></p>
Full article ">Figure 7 Cont.
<p>Selective arm anatomies of the metamorphic structure and singularities are displayed in joint space and half cross-section of workspace with design parameters: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>30</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>90</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="normal">d</mi> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.4</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>75</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>15</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="normal">d</mi> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.4</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mn>0</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="sans-serif">θ</mi> </mrow> </mrow> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mn>0</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="normal">d</mi> </mrow> </mrow> <mrow> <mrow> <mn>4</mn> </mrow> </mrow> </msub> <mrow> <mo>=</mo> <mn>0.2</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mo>±</mo> <mn>90</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="sans-serif">θ</mi> </mrow> </mrow> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mn>0</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="normal">d</mi> </mrow> </mrow> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.2</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>e</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mn>0</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="sans-serif">θ</mi> </mrow> </mrow> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mo>±</mo> <mn>30</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="normal">d</mi> </mrow> </mrow> <mrow> <mrow> <mn>4</mn> </mrow> </mrow> </msub> <mrow> <mo>=</mo> <mn>0.2</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> <mo>.</mo> </mrow> </semantics></math></p>
Full article ">Figure 7 Cont.
<p>Selective arm anatomies of the metamorphic structure and singularities are displayed in joint space and half cross-section of workspace with design parameters: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>30</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>90</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="normal">d</mi> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.4</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>75</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <mrow> <mo>=</mo> <mo>±</mo> <mn>15</mn> <mo>°</mo> <mo>,</mo> </mrow> <msub> <mi mathvariant="normal">d</mi> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.4</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mn>0</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="sans-serif">θ</mi> </mrow> </mrow> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mn>0</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="normal">d</mi> </mrow> </mrow> <mrow> <mrow> <mn>4</mn> </mrow> </mrow> </msub> <mrow> <mo>=</mo> <mn>0.2</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mo>±</mo> <mn>90</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="sans-serif">θ</mi> </mrow> </mrow> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mn>0</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="normal">d</mi> </mrow> </mrow> <mn>4</mn> </msub> <mrow> <mo>=</mo> <mn>0.2</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> (<b>e</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mn>0</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="sans-serif">θ</mi> </mrow> </mrow> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> <msub> <mrow> <mrow> <mo>=</mo> <mo>±</mo> <mn>30</mn> <mo>°</mo> <mo>,</mo> <mi mathvariant="normal">d</mi> </mrow> </mrow> <mrow> <mrow> <mn>4</mn> </mrow> </mrow> </msub> <mrow> <mo>=</mo> <mn>0.2</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> </mrow> <mo>.</mo> </mrow> </semantics></math></p>
Full article ">Figure 8
<p>Continuous direct and inverse projection mappings of internal and external singularities in a section of the metamorphic workspace, with variation only of <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>1</mn> </msub> </mrow> </msub> </mrow> </semantics></math> on the left (<b>a</b>) and of <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">θ</mi> <mrow> <msub> <mi mathvariant="sans-serif">π</mi> <mn>2</mn> </msub> </mrow> </msub> </mrow> </semantics></math> on the right (<b>b</b>).</p>
Full article ">Figure 9
<p>(<b>a</b>) A path free of kinematic singularities joins two inverse kinematic solutions in aspect <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">A</mi> <mn>1</mn> </msub> </mrow> </semantics></math>, (<b>b</b>) perfect cyclic motion of the TCP encircling a cusp point in the workspace of the selected metamorphic anatomy.</p>
Full article ">Figure 10
<p>Joint behavior while performing a non-singular posture-changing trajectory.</p>
Full article ">Figure 11
<p>The determinant of the Jacobian matrix as a function of discrete steps for a perfect circle.</p>
Full article ">Figure 12
<p>(<b>a</b>) A continuous and smooth path for two inverse kinematic solutions without change of posture, (<b>b</b>) circle performed in a half cross-section of the metamorphic workspace, encircling a cusp point.</p>
Full article ">Figure 13
<p>Change of angular position when the metamorphic anatomy performs a non-singular posture-changing trajectory.</p>
Full article ">Figure 14
<p>The behavior of the determinant of the Jacobian for a non-generic metamorphic anatomy.</p>
Full article ">Figure 15
<p>(<b>a</b>) Smooth curved joint path in aspect <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">A</mi> <mn>1</mn> </msub> </mrow> </semantics></math> and the respective kinematic singularities; (<b>b</b>) rectilinear motion of the metamorphic mechanism in a half cross-section of the workspace <math display="inline"><semantics> <mrow> <mrow> <mo>(</mo> <mrow> <mrow> <mi mathvariant="sans-serif">ρ</mi> <mo>,</mo> <mi mathvariant="sans-serif">Ζ</mi> </mrow> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math> in the region with 2 IKS.</p>
Full article ">Figure 16
<p>Active joint behavior during the rectilinear motion of the end-effector.</p>
Full article ">Figure 17
<p>The determinant of the geometric Jacobian as a continuous function along the rectilinear trajectory.</p>
Full article ">