Robotics, Volume 11, Issue 1 (February 2022) – 29 articles

Cover Story (view full-size image): Bio-inspired solutions are currently being investigated as a source of maneuvering improvement for autonomous underwater vehicles. To address this ambitious objective, the authors designed a novel transmission system capable of transforming the constant angular velocity of a single rotary motor into the pitching–yawing motion of fish pectoral fins. Here, the biomimetic thrusters exploit the drag-based momentum transfer mechanism of labriform swimmers to generate a steering torque. Aside from reducing inertia and encumbrance, the main improvement is the inherent synchronization of the system granted by the mechanism kinematics. The proposed solution was sized using experimental results collected by biologists and then integrated into a multiphysics simulation environment to predict the resulting maneuvering performance. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
24 pages, 10139 KiB  
Article
Evaluation Criteria for Trajectories of Robotic Arms
by Michal Dobiš, Martin Dekan, Peter Beňo, František Duchoň and Andrej Babinec
Robotics 2022, 11(1), 29; https://doi.org/10.3390/robotics11010029 - 15 Feb 2022
Cited by 5 | Viewed by 6380
Abstract
This paper presents a complex trajectory evaluation framework with a high potential for use in many industrial applications. The framework focuses on the evaluation of robotic arm trajectories containing only robot states defined in joint space, without any time parametrization (velocities or accelerations). The solution presented in this article consists of multiple criteria, mainly based on well-known trajectory metrics, which were slightly modified to allow their application to this type of trajectory. The framework provides a methodology for accurately comparing paths generated by randomized path planners with respect to numerous industrial optimization criteria. This makes the selection of the optimal path planner, or of its configuration for a specific application, much easier. The designed criteria were thoroughly evaluated experimentally using a real industrial robot. The results of these experiments confirmed the correlation between the predicted robot behavior and the behavior of the robot during trajectory execution. Full article
(This article belongs to the Special Issue Advances in Industrial Robotics and Intelligent Systems)
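As a rough illustration of the kind of time-free path metrics such a framework builds on, the sketch below computes a joint-distance and a Cartesian-distance criterion from joint-space waypoints. It is a minimal sketch under assumptions (a hypothetical planar two-link arm stands in for the forward kinematics), not the authors' implementation.

```python
import numpy as np

def joint_distance(waypoints):
    """Sum of absolute joint displacements along a time-free joint-space path."""
    q = np.asarray(waypoints)                                   # shape (N, n_joints)
    return np.sum(np.abs(np.diff(q, axis=0)))

def cartesian_distance(waypoints, forward_kinematics):
    """Sum of Euclidean end-effector displacements between consecutive waypoints."""
    xyz = np.array([forward_kinematics(q) for q in waypoints])  # shape (N, 3)
    return np.sum(np.linalg.norm(np.diff(xyz, axis=0), axis=1))

# Hypothetical planar 2-link arm, used only to make the example runnable.
def fk_planar_2link(q, l1=0.5, l2=0.4):
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y, 0.0])

path = [[0.0, 0.0], [0.3, 0.2], [0.7, 0.5], [1.0, 0.9]]   # joint-space waypoints (rad)
print(joint_distance(path))
print(cartesian_distance(path, fk_planar_2link))
```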
Show Figures
Figure 1: Kuka KR120 with gripper, used to execute planned paths in a series of criteria-validating experiments.
Figure 2: The path between the start state and the goal state.
Figure 3: Histograms of differences between planned paths and executed trajectories, compared for the following criteria: (a) joint distance; (b) Cartesian distance; (c) orientation change; (d) robot displacement.
Figure 4: Energy consumption after a cold start of the robot.
Figure 5: Illustration of one trajectory, which is executed in experiments 2–5: (a) experiment no. 2, the "longer" trajectory; (b) experiment no. 3, the "shorter" trajectory; (c) experiment no. 4, the trajectory with the TCP further along the x-axis; (d) experiment no. 5, the "vertical" trajectory.
Figure 6: The histogram of differences between predicted and measured energy consumption in experiment 1, which was used to find the control cost constants.
Figure 7: Histograms of the differences between predicted and measured energy consumption: (a) experiment no. 2; (b) experiment no. 3; (c) experiment no. 4; (d) experiment no. 5.
Figure 8: Boxplot comparing the results from experiments 1–5.
Figure 9: Pseudo-jerk calculated from the planned path.
Figure 10: Jerk calculated from positions on the executed trajectory.
Figure 11: The calculated pseudo-jerk between each waypoint, with local peaks marked.
Figure 12: The "slowdown" of the robot, with local peaks marked.
Figure 13: The dependency between the pseudo-jerk and the real robot "slowdown".
Figure 14: The histogram of differences between the predicted (function (11), the red curve in Figure 13) and the measured (blue dots in Figure 13) robot "slowdown".
Figure 15: The dependency between the expected Cartesian pseudo-jerk and the real robot "slowdown" in the Cartesian space of the tool point.
Figure 16: The histogram of differences between the predicted (function (12), the red curve in Figure 15) and the measured (blue dots in Figure 15) robot "slowdown".
Figure 17: Illustration of the first experimental setup and robot states: (a) the start state; (b) the goal state.
Figure 18: Experimental setups in which extra obstacles are added: (a) the second experiment; (b) the third experiment.
Figure 19: Histograms depicting the score for each criterion (blue—no. 1, orange—no. 2, green—no. 3): (a) joint distance; (b) Cartesian distance; (c) robot displacement; (d) orientation change; (e) joint jerk peaks; (f) Cartesian jerk peaks; (g) control pseudo-cost; (h) duration.
Figure 20: Experimental setups: (a) the first setup; (b) the second setup; (c) the third setup.
Figure 21: Histograms depicting the score for each criterion (blue—no. 1, orange—no. 2, green—no. 3): (a) joint distance; (b) Cartesian distance; (c) robot displacement; (d) orientation change; (e) joint jerk peaks; (f) Cartesian jerk peaks; (g) control pseudo-cost; (h) duration.
24 pages, 3770 KiB  
Article
Identifying Personality Dimensions for Engineering Robot Personalities in Significant Quantities with Small User Groups
by Liangyi Luo, Kohei Ogawa and Hiroshi Ishiguro
Robotics 2022, 11(1), 28; https://doi.org/10.3390/robotics11010028 - 14 Feb 2022
Cited by 6 | Viewed by 3895
Abstract
Future service robots mass-produced for practical applications may benefit from having personalities. To engineer robot personalities in significant quantities for practical applications, we first need to identify the personality dimensions on which personality traits can be effectively optimised by minimising the distances between engineering targets and the corresponding robots under construction, since not all personality dimensions are applicable and equally prominent. Whether optimisation is possible on a personality dimension depends on how specific users consider the personalities of a type of robot, especially whether they can provide effective feedback to guide the optimisation of certain traits on that dimension. The dimensions may vary from user group to user group, since not all people consider a type of trait to be relevant to a type of robot, which our results corroborate. Therefore, we propose a test procedure as an engineering tool to identify, with the help of a user group, the personality dimensions for engineering the personalities of a type of robot, given its typical usage. It applies to robots that can imitate human behaviour and to small user groups of at least eight people. We confirmed its effectiveness in limited-scope tests. Full article
(This article belongs to the Special Issue Robotics: 10th Anniversary Feature Papers)
Show Figures
Figure 1: Workflow of the procedure (the four rows represent the four phases; the arrows mark dependencies; the capsule is the starting point; the rectangles are processes; the cylinders are data sets or materials; and the hexagons are the results of processes). Phase 1 records the archetypal behaviour; Phase 2 implements the behaviour; Phase 3 is the user assessment of robot personalities; and Phase 4 comprises the three tests.
Figure 2: The tested robot in action, as the human observers viewed it in Phase 3 (but without pixelating the actor's face) when they reported the robot personalities.
Figure 3: Workflow adapted to our test.
Figure 4: Results of the fidelity test (Section 5.1).
Figure 5: Results of the fidelity 'sanity' test (Section 5.3).
16 pages, 7636 KiB  
Article
Modeling and Analysis of a High-Speed Adjustable Grasping Robot Controlled by a Pneumatic Actuator
by Kenichi Ohara, Ryosuke Iwazawa and Makoto Kaneko
Robotics 2022, 11(1), 27; https://doi.org/10.3390/robotics11010027 - 12 Feb 2022
Cited by 6 | Viewed by 3579
Abstract
This paper discusses the modeling and analysis of a high-speed adjustable grasping robot controlled by a pneumatic actuator. The robot is composed of two grippers, two wires connecting a pneumatic cylinder and an arm with spring-loaded gripper joints, as well as two stoppers for controlling the gripper stopping point with a brake. By supplying pressurized air into the pneumatic cylinder, the two grippers move forward together with the arm and capture the object by adjusting the air pressure in the cylinder. After capturing the target object, the system can release it by changing the air pressure in the cylinder using another port. By considering the state equation of the air, we obtain a dynamic model of the robot, including the actuator. Through numerical simulation, we show that the simulation results can explain the experimental results from the developed robot system. Through our experiments, we confirm that the developed high-speed grasping robot can grasp continuously moving objects with a gap of ±15 mm at 300 mm/s. Full article
(This article belongs to the Section Sensors and Control in Robotics)
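As a rough illustration of the kind of dynamic model obtained from the state equation of the air, the sketch below integrates a heavily simplified cylinder–piston model (isothermal ideal gas, constant mass inflow, no friction or wire/spring dynamics). All parameter values are assumptions chosen for illustration, not the authors' identified values.

```python
# Assumed illustrative parameters (not from the paper).
A = 2.0e-4       # piston area, m^2
m = 0.05         # moving mass, kg
V0 = 1.0e-5      # dead volume, m^3
R, T = 287.0, 293.0   # gas constant (J/kg/K), temperature (K)
mdot = 2.0e-4    # constant mass flow into the cylinder, kg/s
p_atm = 1.013e5  # back pressure on the rod side, Pa

def simulate(t_end=0.05, dt=1e-5):
    x, v, m_gas = 0.0, 0.0, p_atm * V0 / (R * T)   # state: position, velocity, gas mass
    for _ in range(int(t_end / dt)):
        V = V0 + A * x
        p = m_gas * R * T / V                      # isothermal ideal-gas state equation
        a = (p - p_atm) * A / m                    # piston acceleration
        x, v = x + v * dt, v + a * dt
        m_gas += mdot * dt                         # charging the cylinder
    return x, v

print(simulate())   # final piston position (m) and velocity (m/s)
```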
Show Figures
Figure 1: Capturing patterns of the developed high-speed grasping robot: (a) successful grasping; (b) grasp failure due to the collision problem; (c) grasp failure due to the geometric problem.
Figure 2: Design concept of a high-speed grasping robot with the braking mechanism.
Figure 3: Modeling of the driving part.
Figure 4: Modeling of the gripper part.
Figure 5: Specifications of the high-speed grasping robot.
Figure 6: Experimental environment for verification of the grasping time.
Figure 7: Simulation results when the sonic conductance is increased: (a) the simulated velocity; (b) the simulated movement; (c) the model with the gripper part removed.
Figure 8: Comparison of the simulation and experimental results: (a) the simulated and experimental movement; (b) the model with the gripper part included.
Figure 9: The response characteristics of the robot approach time.
Figure 10: Experimental results for the robot approach time.
Figure 11: The response characteristics of the robot approach and release times.
Figure 12: Continuous photos: (a) grasping motion; (b) brake action when the arm movement is 45 mm.
Figure 13: Experimental setup for the continuous grasping of moving objects: (a) system configuration; (b) target objects (200 mL drink pack). v_object and v_belt are the velocity of the object and the velocity of the belt conveyor, respectively. Experiments were performed with a v_belt speed of 300 mm/s.
Figure 14: An example of the experimental results of grasping and laying down moving objects with misalignment: (a) grasping motion; (b) laying-down motion.
Figure 15: Experimental results of grasping moving objects: (a) 200 mL drink pack; (b) 330 mL PET bottles. A trial counts as a success when the robot system grasps the object.
20 pages, 2306 KiB  
Article
Six-Bar Linkage Models of a Recumbent Tricycle Mechanism to Increase Power Throughput in FES Cycling
by Nicholas A. Lanese, David H. Myszka, Anthony L. Bazler and Andrew P. Murray
Robotics 2022, 11(1), 26; https://doi.org/10.3390/robotics11010026 - 11 Feb 2022
Cited by 2 | Viewed by 4273
Abstract
This paper presents the kinematic and static analysis of two mechanisms to improve power throughput for persons with tetra- or paraplegia pedaling a performance tricycle via FES. FES, or functional electrical stimulation, activates muscles by passing small electrical currents through the muscle, creating a contraction. The use of FES can build muscle in patients, relieve soreness, and promote cardiovascular health. Compared to an able-bodied rider, a cyclist stimulated via FES produces an order of magnitude less power, creating some notable pedaling difficulties, especially pertaining to inactive zones. An inactive zone occurs when the leg position is unable to produce enough power to propel the tricycle via muscle stimulation; it is typically present when one leg is fully bent and the other leg is fully extended. Altering the motion of a cyclist's legs relative to the crank position can potentially reduce inactive zones and increase power throughput. Some recently marketed bicycles showcase pedal mechanisms utilizing alternate leg motions. This work considers performance tricycle designs based on the Stephenson III and Watt II six-bar mechanisms, where the legs define two of the system's links. The architecture based on the Stephenson III is referred to throughout as the CDT, because the legs' push acts to coupler-drive the four-bar component of the system. The architecture based on the Watt II is referred to throughout as the CRT, because the legs' push acts to drive the rocker link of the four-bar component of the system. The unmodified or traditional recumbent tricycle (TRT) provides a benchmark by which the designs proposed herein may be evaluated. Using knee and hip torques and angular velocities consistent with a previous study, this numerical study, based on a quasi-static power model, suggests roughly a 50% increase in average crank power for the CRT and roughly a doubling for the CDT for a typical FES cyclist. Full article
(This article belongs to the Special Issue Kinematics and Robot Design IV, KaRD2021)
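To make the quasi-static power idea concrete, the sketch below tallies average crank power as the integral of joint torque times joint angular velocity over one crank revolution. The joint curves here are made-up placeholders, not the Szecsi data or the paper's mechanism models.

```python
import numpy as np

# Placeholder joint data over one crank revolution (illustrative only).
crank_angle = np.linspace(0.0, 2.0 * np.pi, 361)
knee_torque = 10.0 * np.clip(np.sin(crank_angle), 0.0, None)        # N*m, push phase only
hip_torque  = 8.0  * np.clip(np.sin(crank_angle + 0.5), 0.0, None)  # N*m
knee_omega  = 1.5  * np.abs(np.sin(crank_angle))                    # rad/s
hip_omega   = 1.2  * np.abs(np.sin(crank_angle + 0.5))              # rad/s

# Quasi-static assumption: instantaneous crank power equals the sum of joint powers.
instantaneous_power = knee_torque * knee_omega + hip_torque * hip_omega
average_power = np.trapz(instantaneous_power, crank_angle) / (2.0 * np.pi)
print(f"average crank power over one revolution: {average_power:.1f} W")
```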
Show Figures
Figure 1: Knee and hip moment data reported in the Szecsi P1P2 and P1P3 groups, adapted from Ref. [21], 2014 Clarivate Analytics Web of Science, shown as the black curve. The numerical approximations are shown in blue. The curves are plotted from an absolute crank angle of 22° to correspond with Szecsi's data, which is measured from top dead center.
Figure 2: Kinematic model for the TRT mechanism, adapted from Ref. [23], 2014 Elsevier Ltd.
Figure 3: Free-body diagrams for the TRT force model.
Figure 4: The colored curves represent the power calculated by the TRT model at the crank center attributed to the knee (cyan), hip (magenta), and total (blue). The calculated data overlay the black curves that represent the power reported by Szecsi.
Figure 5: Joint moments as a function of relative joint angles.
Figure 6: Kinematic model for the CDT mechanism, adapted from Ref. [23], 2014 Elsevier Ltd.
Figure 7: Free-body diagrams for the CDT mechanism.
Figure 8: Kinematic model for the CRT mechanism, adapted from Ref. [23], 2014 Elsevier Ltd.
Figure 9: Free-body diagrams for the CRT mechanism.
Figure 10: The design space shown in green bounds the region of usable crank center and rocker arm pivot locations for both the CRT and CDT designs, in units of meters.
Figure 11: Joint torque curves for CRT design R2 and CDT design D1. Note that the axis limits are kept the same as in Figure 5 for comparative purposes.
Figure 12: Instantaneous power curves at the crank center: (a) CRT design R2, P1P2 group; (b) CRT design R2, P1P3 group; (c) CDT design D1, P1P2 group; (d) CDT design D1, P1P3 group.
Figure 13: A scaled kinematic sketch of CDT design D1, in meters.
21 pages, 8040 KiB  
Article
Implementation of a Flexible and Lightweight Depth-Based Visual Servoing Solution for Feature Detection and Tracing of Large, Spatially-Varying Manufacturing Workpieces
by Lee Clift, Divya Tiwari, Chris Scraggs, Windo Hutabarat, Lloyd Tinkler, Jonathan M. Aitken and Ashutosh Tiwari
Robotics 2022, 11(1), 25; https://doi.org/10.3390/robotics11010025 - 11 Feb 2022
Viewed by 3507
Abstract
This work proposes a novel solution for detecting and tracing spatially varying edges of large manufacturing workpieces, using a consumer-grade RGB depth camera, with only a partial view of the workpiece and without prior knowledge. The proposed system can visually detect and trace various edges, spanning a wide range of angles, to an accuracy of 15 mm or less, without the need for any previous information, setup, or planning. A combination of physical experiments on the setup and more complex simulated experiments were conducted. The effectiveness of the system is demonstrated via simulated and physical experiments carried out on both acute and obtuse edges, as well as typical aerospace structures, made from a variety of materials, with dimensions ranging from 400 mm to 600 mm. Simulated results show that, with artificial noise added, the solution presented can detect aerospace structures to an accuracy of 40 mm or less, depending on the amount of noise present, while physical aerospace-inspired structures can be traced with a consistent accuracy of 5 mm regardless of the cardinal direction. Compared to current industrial solutions, the lack of required planning and the robustness of the edge detection mean that the system should complete tasks more quickly and easily than the current standard, at a lower financial and computational cost than the techniques currently in use. Full article
(This article belongs to the Special Issue Industrial Robotics in Industry 4.0)
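The figure list below mentions a Canny edge output; as a hedged sketch, depth-edge detection of this kind can be prototyped in a few lines of OpenCV on a depth frame, although the paper's full pipeline (plane/curve intersection and tracing with the arm) is considerably more involved. The synthetic depth frame below is a stand-in for an RGB-D camera frame, not the authors' data.

```python
import numpy as np
import cv2

# Synthetic 16-bit depth frame standing in for an RGB-D camera frame (e.g., a RealSense).
depth = np.full((480, 640), 1000, dtype=np.uint16)   # flat background at 1000 mm
depth[150:330, 200:440] = 600                        # a raised workpiece at 600 mm

# Normalise to 8-bit and run Canny to expose depth discontinuities (candidate edges).
depth_8u = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
edges = cv2.Canny(cv2.GaussianBlur(depth_8u, (5, 5), 0), 50, 150)

# Pixel coordinates of detected edge points, which a tracing routine could then follow.
edge_points = np.column_stack(np.nonzero(edges))
print(edge_points.shape)
```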
Show Figures
Figure 1: A CAD image of an example aerostructure needing to be traced (highlighted).
Figure 2: An overview of the research presented.
Figure 3: The workflow used for detecting and moving towards edges with the Mover6.
Figure 4: The workflow for detecting the point at which a flat plane and a curved edge meet.
Figure 5: The setup for simulated obtuse edge detection.
Figure 6: The workflow used for detecting the dependency values of different environmental variables.
Figure 7: The lab setup for the environmental parameter experiments: (a) a photo of the environmental setup; (b) a photo of the kinematic arm holding an Intel RealSense camera.
Figure 8: The tracing experimental setup with a hollow cardboard workpiece measuring 500 mm by 350 mm by 350 mm.
Figure 9: The tracing experimental setup with an aerospace-inspired plywood workpiece measuring 600 mm by 170 mm by 3 mm.
Figure 10: The tracing experimental setup with an aerospace-inspired concave Perspex workpiece measuring 400 mm by 200 mm by 3 mm.
Figure 11: The tracing experimental setup with a curved MDF workpiece measuring 1220 mm by 607 mm by 6 mm.
Figure 12: A simulated setup for the camera to detect edges of the workpiece: (top left) the Gazebo simulation; (top right) the Canny edge output; (bottom left) the pixel detection output; (bottom right) the raw camera output.
Figure 13: The workpiece detection heatmaps, displaying the disparity between the detected results and a known ground truth.
Figure 14: A bar chart showing the Euclidean distance between the ground truth and each experiment.
Figure 15: A box plot showing the relationship between the amount of noise and the absolute Euclidean error.
Figure 16: A scatter graph showing all detected points for each experiment.
Figure 17: Detected points of a curved workpiece edge at a 1 m distance.
Figure 18: Detected points of a curved workpiece edge at a 0.75 m distance.
Figure 19: Detected points of a curved workpiece edge at 30°, at a 1 m distance.
Figure 20: A scatter graph showing the local coordinate positions of the four test cases.
Figure 21: Detected points of an aerospace-inspired workpiece.
Figure 22: Detected points of an aerospace-inspired concave workpiece.
Figure 23: Detected points of an aerospace-inspired curved workpiece.
27 pages, 1737 KiB  
Review
A Comprehensive Survey of Visual SLAM Algorithms
by Andréa Macario Barros, Maugan Michel, Yoann Moline, Gwenolé Corre and Frédérick Carrel
Robotics 2022, 11(1), 24; https://doi.org/10.3390/robotics11010024 - 10 Feb 2022
Cited by 242 | Viewed by 37095
Abstract
Simultaneous localization and mapping (SLAM) techniques are widely researched, since they allow the simultaneous creation of a map and the estimation of the sensors' pose in an unknown environment. Visual-based SLAM techniques play a significant role in this field, as they rely on a low-cost and small sensor system, which gives them advantages over SLAM techniques based on other sensors. The literature presents different approaches and methods to implement visual-based SLAM systems. Among this variety of publications, a beginner in this domain may find it difficult to identify and analyze the main algorithms and to select the most appropriate one according to his or her project constraints. Therefore, we present the three main visual-based SLAM approaches (visual-only, visual-inertial, and RGB-D SLAM), providing a review of the main algorithms of each approach through diagrams and flowcharts, and highlighting the main advantages and disadvantages of each technique. Furthermore, we propose six criteria that ease the analysis of SLAM algorithms and consider both the software and hardware levels. In addition, we present some major issues and future directions in the visual SLAM field, and provide a general overview of some of the existing benchmark datasets. This work aims to be the first step for those initiating a SLAM project to have a good perspective of the main elements and characteristics of SLAM techniques. Full article
(This article belongs to the Topic Motion Planning and Control for Robotics)
Show Figures
Figure 1: General components of a visual-based SLAM system. The depth and inertial data may be added to the 2D visual input to generate a sparse map (generated with the ORB-SLAM3 algorithm [22] on the MH_01 sequence [23]), a semi-dense map (obtained with LSD-SLAM [24] on the dataset provided by the authors), or a dense reconstruction (reprinted from [25]).
Figure 2: General differences between feature-based and direct methods. Top: main steps followed by feature-based methods, resulting in a sparse reconstruction (map generated with the ORB-SLAM3 algorithm [22] on the MH_01/EuRoC sequence [23]). Bottom: main steps followed by a direct method, which may result in a sparse reconstruction (generated from sequence_02/TUM MonoVO [30] with the DSO algorithm [31]) or a dense reconstruction (reprinted from [25]), according to the chosen technique.
Figure 3: Timeline representing the most representative visual-only SLAM algorithms.
Figure 4: Diagram representing the MonoSLAM algorithm.
Figure 5: Diagram representing the PTAM algorithm.
Figure 6: Diagram representing the DTAM algorithm.
Figure 7: Diagram representing the SVO algorithm. Adapted from [40].
Figure 8: Diagram representing the LSD-SLAM algorithm. Adapted from [24].
Figure 9: Diagram representing the ORB-SLAM 2.0 algorithm. Adapted from [41].
Figure 10: Diagram representing the CNN-SLAM algorithm. Adapted from [48].
Figure 11: Diagram representing the DSO algorithm.
Figure 12: Timeline representing the most representative visual-inertial SLAM algorithms.
Figure 13: Diagram representing the MSCKF algorithm.
Figure 14: Diagram representing the OKVIS algorithm.
Figure 15: Diagram representing the feature handling performed by the ROVIO algorithm. Adapted from [69].
Figure 16: Diagram representing the VIORB algorithm.
Figure 17: Diagram representing the VINS-Mono algorithm.
Figure 18: Diagram representing the VI-DSO algorithm.
Figure 19: Timeline representing the most representative RGB-D-based SLAM algorithms.
Figure 20: Diagram representing the KinectFusion algorithm. Adapted from [90].
Figure 21: Diagram representing the SLAM++ algorithm. Adapted from [94].
Figure 22: Diagram representing the DVO algorithm.
Figure 23: Diagram representing the RGBDSLAMv2 algorithm. Adapted from [96].
15 pages, 10334 KiB  
Article
A New Hyperloop Transportation System: Design and Practical Integration
by Mohammad Bhuiya, Md Mohiminul Aziz, Fariha Mursheda, Ryan Lum, Navjeet Brar and Mohamed Youssef
Robotics 2022, 11(1), 23; https://doi.org/10.3390/robotics11010023 - 8 Feb 2022
Cited by 6 | Viewed by 6538
Abstract
This paper introduces the design and implementation of a new Hyperloop transportation system. The main contribution of this paper is the design and integration of propulsion components for a linear motion system with battery storage. The proposed Hyperloop design provides a high-speed means of transportation for passengers and freight by utilizing linear synchronous motors. In this study, a three-phase inverter was designed and simulated using PSIM. A prototype of this design was built and integrated with a linear synchronous motor. The operation of the full system integration satisfies a proof-of-concept design. The inverter system is studied in conjunction with a linear synchronous motor for a rigid Hyperloop system. The prototype of this system achieves bidirectional propulsion. Battery state-of-charge simulation results are given for a typical motoring and braking scenario. Full article
(This article belongs to the Special Issue Advances in Industrial Robotics and Intelligent Systems)
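The dq-frame current loop at the heart of a field-oriented control (FOC) scheme such as the one simulated here in PSIM can be sketched in a few lines; the PI gains, reference currents, and measured values below are illustrative assumptions, not the paper's design, and only a single control step is shown.

```python
import math

# Illustrative PI current controller for one axis of a PMSM/LSM under FOC.
class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0
    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

def inverse_park(v_d, v_q, theta):
    """Rotate dq voltages back to the stationary alpha-beta frame."""
    v_alpha = v_d * math.cos(theta) - v_q * math.sin(theta)
    v_beta  = v_d * math.sin(theta) + v_q * math.cos(theta)
    return v_alpha, v_beta

dt = 1e-4
id_ctrl, iq_ctrl = PI(2.0, 200.0, dt), PI(2.0, 200.0, dt)
i_d_ref, i_q_ref = 0.0, 5.0            # thrust is commanded through i_q
i_d, i_q, theta = 0.0, 0.0, 0.0        # measured currents and electrical angle (placeholders)

v_d = id_ctrl.step(i_d_ref - i_d)
v_q = iq_ctrl.step(i_q_ref - i_q)
print(inverse_park(v_d, v_q, theta))   # voltages handed to the PWM/inverter stage
```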
Show Figures
Figure 1: Block diagram for the LSM, adapted from [25].
Figure 2: Block diagram for the FOC control scheme.
Figure 3: Bidirectional DC-DC converter control system.
Figure 4: Circuit setup for FOC of the PMSM.
Figure 5: Three-phase AC line currents (amperage vs. time).
Figure 6: Speed response (rad/s vs. time).
Figure 7: Battery current for regenerative braking (amperage vs. time).
Figure 8: SOC of the battery.
Figure 9: Single-coil setup in ANSYS Maxwell.
Figure 10: Four-coil simulations.
Figure 11: Magnetic field strength across the Z distance.
Figure 12: Three-phase inverter prototype.
Figure 13: Line-to-line voltage waveform (Vab), before the dv/dt filter of the motor.
Figure 14: Prototype inverter circuit waveforms.
Figure 15: Prototype linear motor.
Figure 16: Prototype linear motor with test pod.
15 pages, 8262 KiB  
Article
Herbicide Ballistic Technology for Unmanned Aircraft Systems
by Roberto Rodriguez, James J. K. Leary and Daniel M. Jenkins
Robotics 2022, 11(1), 22; https://doi.org/10.3390/robotics11010022 - 3 Feb 2022
Cited by 3 | Viewed by 3535
Abstract
Miconia is a highly invasive plant species, with incipient plants occupying remote areas of Hawaiian watersheds. Management of these incipient plants is integral to current containment strategies. Herbicide Ballistic Technology (HBT) has been used for 8 years from helicopters as a precision approach to target individual plants. We have developed a prototype HBT applicator integrated onto an unmanned aircraft system, the HBT-UAS, which offers the same precision approach with a semi-automated flight plan. Inclusion of the HBT payload resulted in statistically significant deviations from programmed flight plans compared to the unencumbered UAS, but the effect size was lower than that observed for different stages of flight. The additional payload of the HBT-UAS greatly reduced the available flight time, limiting the range to 22 m. The projectile spread of the HBT-UAS, within a 2–10 m range, had a maximum CEP of 1.87–5.58 cm. The most substantial limitation of the current prototype HBT-UAS is the available flight time. The use of larger-capacity UAS and the potential for beyond-visual-line-of-sight operations would substantially improve the serviceable area and utility of the HBT-UAS for containment of invasive plants. Full article
(This article belongs to the Section Agricultural and Field Robotics)
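As a hedged sketch of one common way to estimate a circular error probable (CEP) from projectile impact coordinates, the snippet below takes the radius, centred on the mean point of impact, that contains 50% of the hits. The sample points are made up, and the paper may use a different estimator.

```python
import numpy as np

def cep50(impacts):
    """Radius of the circle, centred on the mean point of impact, containing 50% of hits."""
    pts = np.asarray(impacts, dtype=float)        # shape (N, 2), e.g. cm
    radii = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    return np.median(radii)

# Made-up impact coordinates relative to the aiming point (cm).
hits = [(0.5, -1.2), (1.8, 0.3), (-0.7, 0.9), (2.1, -0.4), (-1.0, -1.5), (0.2, 1.1)]
print(f"CEP of the sample impacts: {cep50(hits):.2f} cm")
```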
Show Figures
Figure 1: Map of detected miconia plants on the island of Maui. Approximately 10% of detected miconia occur in voluntary avoidance areas [20].
Figure 2: Four phases of HBT-UAS operation: (1) launch, (2) target acquisition, (3) treatment, and (4) return to the launch site. Active personnel during each phase are colored green.
Figure 3: View from the first-person-view video feed of the HBT-UAS.
Figure 4: (A) Gimbal marker system with major components labeled. (B) HBT-UAS in flight.
Figure 5: Schematic of the properties and forces of projectile propulsion in the barrel of the HBT-UAS.
Figure 6: Forces acting on the projectile after leaving the barrel. Lift is caused by the "Magnus" effect of the projectile spinning through the viscous fluid.
Figure 7: (A) Simulation of the velocity of the projectile as it travels through the barrel, resulting in a predicted exit velocity of 81.76 m/s (white circle) from the barrel of the custom HBT marker. (B) Simulation of the trajectory of the projectile after it leaves a horizontally aligned barrel, with the positions of targets relative to the marker barrel axis (vertical position 0) during experimental validation indicated by white circles.
Figure 8: Planned flight path to evaluate flight stability and battery draw for the validation flights.
Figure 9: (A) Experimental setup of the target, consisting of kraft paper pulled taut. (B) Exit holes in the kraft paper are mapped using ImageJ, with a tape measure used to calibrate the pixel size.
Figure 10: (A) Horizontal deviation and (B) 3D deviation from the planned flight path for the control (red) and payload-equipped (blue) aircraft. Results of pairwise t-tests of the means are shown for each phase of flight.
Figure 11: Battery capacity for the programmed flight. The control flight is without payload. Note the large draw during the initial climb of the HBT-equipped flight.
Figure 12: CEP relative to the aiming point (0,0), with distance from the target indicated by color (2 m, blue; 4 m, red; 6 m, green; 8 m, purple; 10 m, yellow).
Figure 13: Positions of projectile impacts (black circles) and local regression (black line) with the standard error of the mean (gray) deviate from the positions initially predicted by the simulation (dashed line).
Figure 14: Miconia points (purple circles) and the voluntary avoidance area (yellow hash marks) within three buffers of available roads (40 m, red; 150 m, orange; 500 m, yellow).
4 pages, 219 KiB  
Editorial
Acknowledgment to Reviewers of Robotics in 2021
by Robotics Editorial Office
Robotics 2022, 11(1), 21; https://doi.org/10.3390/robotics11010021 - 29 Jan 2022
Viewed by 2197
Abstract
Rigorous peer-reviews are the basis of high-quality academic publishing [...] Full article
18 pages, 4896 KiB  
Article
Model-Based Mid-Level Regulation for Assist-As-Needed Hierarchical Control of Wearable Robots: A Computational Study of Human-Robot Adaptation
by Ali Nasr, Arash Hashemi and John McPhee
Robotics 2022, 11(1), 20; https://doi.org/10.3390/robotics11010020 - 29 Jan 2022
Cited by 15 | Viewed by 4936
Abstract
The closed-loop human–robot system requires developing an effective robotic controller that considers models of both the human and the robot, as well as human adaptation to the robot. This paper develops a mid-level controller providing assist-as-needed (AAN) policies in a hierarchical control setting using two novel methods: a model-based rule and a fuzzy-logic rule. The goal of AAN is to provide the extra torque required by the robot's dynamics and the external load compared to free movement of the human limb. The human–robot adaptation is simulated using a nonlinear model predictive controller (NMPC) as the human central nervous system (CNS) for three conditions: initial (the initial session of wearing the robot, without any previous experience), short-term (the entire first session, e.g., 45 min), and long-term experience. The results showed that the two methods (model-based and fuzzy-logic) outperform the traditional proportional method in providing AAN by considering distinct human and robot models. Additionally, the CNS actuator model has difficulty in the initial experience and activates both antagonist and agonist muscles to reduce movement oscillations. In the long-term experience, the simulation shows no oscillation when the CNS NMPC learns the robot model and modifies its weights to simulate realistic human behavior. We found that the desired strength of the robot should be increased gradually to avoid unexpected human–robot interactions (e.g., robot vibration, human spasticity). The proposed mid-level controllers can be used for wearable assistive devices, exoskeletons, and rehabilitation robots. Full article
(This article belongs to the Section Medical Robotics and Service Robotics)
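The assist-as-needed idea described above, supplying only the extra torque introduced by the robot's dynamics and the external load and scaling it by a desired strength, can be summarised in a short sketch. The inverse-dynamics functions here are hypothetical single-joint placeholders, not the authors' model-based or fuzzy-logic implementation.

```python
import math

def aan_assist_torque(q, dq, ddq, desired_strength, tau_coupled, tau_free):
    """Assist-as-needed torque: the extra torque of the coupled human-robot-load system
    relative to free human limb movement, scaled by the desired strength
    (0 = no assistance, 1 = full assistance)."""
    return desired_strength * (tau_coupled(q, dq, ddq) - tau_free(q, dq, ddq))

# Hypothetical single-joint inverse-dynamics placeholders (not the paper's models).
g, m_limb, m_extra, l = 9.81, 2.0, 3.5, 0.3   # m_extra lumps robot inertia and external load

def tau_free(q, dq, ddq):
    return m_limb * l**2 * ddq + m_limb * g * l * math.sin(q)

def tau_coupled(q, dq, ddq):
    return (m_limb + m_extra) * l**2 * ddq + (m_limb + m_extra) * g * l * math.sin(q)

# 50% desired strength at a sample state (q in rad, dq in rad/s, ddq in rad/s^2).
print(aan_assist_torque(0.5, 0.0, 1.0, 0.5, tau_coupled, tau_free))
```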
Show Figures
Figure 1: Schematic of the human–robot–environment system with the hierarchical control strategy. τ̂_h is the estimated human joint torque and θ̂_(t+1) is the predicted/desired joint angle computed by the high-level controller.
Figure 2: Schematic of the proportional rule (A), model-based rule (B), and fuzzy-logic rule (C) as the mid-level controller.
Figure 3: The membership functions and degrees of membership for the two models with three inputs (top) and two outputs (middle). The surface plots of the model outputs for all input variables (bottom).
Figure 4: Schematic of the hierarchical control. The external input signals are the kinematic feedback, the activation or sEMG signal, and the kinetic feedback. The mid-level controller coordinates the desired assistive torque based on the desired strength.
Figure 5: Schematic of the human–robot multibody system. The CNS is represented by the NMPC controller. The MTG model relates the torque to the joint angle and angular speed. The InverseMuscleNET block estimates the sEMG signals.
Figure 6: The motion capture markers (Vicon Motion Systems Ltd., UK) and wireless EMG sensors (Delsys Inc., MA, USA) attached to the subject wearing an inactive EVO exoskeleton (Ekso Bionics Holdings Inc., CA, USA) (top). The sagittal pick-and-place task with active shoulder and elbow joints (A). The desired trajectory of the shoulder and elbow joint angles (B). The external vertical load of the object at the palm of the hand (C). The required activation torque of the shoulder joint with and without an inactive robot, using inverse dynamic simulation (D).
Figure 7: The first scenario (full strength): the shoulder actuation torques (solid line), the desired shoulder actuation torque (dashed line), the extra assistance (red area), and the insufficient torque (blue area). Shown are the three optimized mid-level controllers, proportional (top row), model-based (middle row), and fuzzy-logic (bottom row), for three conditions, initial (left column), short-term (middle column), and long-term experience (right column), of wearing a powered robot.
Figure 8: The integral of the absolute, negative, and positive actuation error for the proportional, model-based, and fuzzy-logic controllers (left); the integral of the absolute velocity error for the three CNS conditions of initial, short-term, and long-term experience (middle); the integral of the absolute tracking error for the three CNS conditions of initial, short-term, and long-term experience (right).
Figure 9: The second scenario (variable strength): the shoulder actuation torques (solid line), the desired shoulder actuation torque (dashed line), the extra assistance (red area), and the insufficient torque (blue area). Shown are the three conditions of initial (left column) with 30% strength, short-term (middle column) with 50% strength, and long-term experience (right column) with 100% strength.
Figure 10: The desired robot torque computed by the simulated inverse dynamic model (dashed), as well as the robot torque for the proportional (red), model-based (black), and fuzzy-logic (blue) mid-level controllers. The first scenario, with full strength and weights optimized for the short-term experience, is shown in the top row; the second scenario, with a strength of 30% for the initial experience (left), 50% for the short-term experience (middle), and 100% for the long-term experience (right), is shown below.
17 pages, 7761 KiB  
Article
Investigation of the Mounting Position of a Wearable Robot Arm
by Akimichi Kojima, Dinh Tuan Tran and Joo-Ho Lee
Robotics 2022, 11(1), 19; https://doi.org/10.3390/robotics11010019 - 29 Jan 2022
Cited by 4 | Viewed by 4545
Abstract
In a wearable robot arm, the minimum joint configuration and link length must be considered to avoid increasing the burden on the user. This work investigated how the joint configuration, length of arm links, and mounting position of a wearable robot arm affect the cooperative and invasive workspaces of the overall workspace. We considered the joint configurations and link lengths of passive and active joints in our proposed wearable robot arm, which is called the Assist Oriented Arm (AOA). In addition, we comprehensively studied the position of the arm on the user. As a result, three locations around the shoulders and two around the waist were chosen as potential mounting sites. Furthermore, we evaluated the weight burden when the user mounted the wearable robot arm at those positions. Full article
(This article belongs to the Section Intelligent Robots and Mechatronics)
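A workspace study of the kind described above typically samples a kinematic model over its joint ranges. The sketch below does this with standard Denavit–Hartenberg transforms for a hypothetical planar two-link arm (placeholder link lengths and joint limits), purely to illustrate the type of analysis, not the AOA's actual kinematics.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def sample_workspace(link_lengths, n=30):
    """End-effector positions of a planar 2-DoF arm over a grid of joint angles."""
    pts = []
    for t1 in np.linspace(-np.pi / 2, np.pi / 2, n):
        for t2 in np.linspace(-np.pi / 2, np.pi / 2, n):
            T = dh_transform(t1, 0, link_lengths[0], 0) @ dh_transform(t2, 0, link_lengths[1], 0)
            pts.append(T[:3, 3])
    return np.array(pts)

pts = sample_workspace([0.30, 0.25])        # hypothetical link lengths in metres
print(pts.shape, pts[:, :2].min(axis=0), pts[:, :2].max(axis=0))
```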
Show Figures
Figure 1: Configuration of the previous wearable robot arm.
Figure 2: Definitions of each workspace according to [20]: (a) extensive workspace, (b) cooperative workspace, (c) invasive workspace.
Figure 3: Definitions of the user's workspace and body dimensions: (a) division of the user's main workspace; (b) average workspace range of a person according to [21]; (c) human dimensions according to [22].
Figure 4: Definition of the joint configurations.
Figure 5: Denavit–Hartenberg representation.
Figure 6: Effect of the passive-joint link length on the range of motion when the link length is (a) greater than the main workspace, (b) much smaller than the main workspace, and (c) similar to the maximum main workspace.
Figure 7: Maximum range of motion for each passive joint configuration.
Figure 8: Differences in the range of motion of passive joint configurations and link lengths at the high, middle, and low positions.
Figure 9: Joint configuration of the robot arm.
Figure 10: Range of motion due to the difference in the link lengths of the active joints.
Figure 11: Percentage of the cooperative workspace in the main workspace and the invasive workspace in the middle range.
Figure 12: Candidate mounting positions.
Figure 13: Stress verification at each mounting position.
Figure 14: Wearing the AOA on the shoulder and waist. The AOA was implemented based on the dimensions specified in this study.
Figure 15: Results of the user burden experiment.
Figure 16: Applications of the wearable robot arm.
14 pages, 29924 KiB  
Article
Screwdriving Gripper That Mimics Human Two-Handed Assembly Tasks
by Sangchul Han, Myoung-Su Choi, Yong-Woo Shin, Ga-Ram Jang, Dong-Hyuk Lee, Jungsan Cho, Jae-Han Park and Ji-Hun Bae
Robotics 2022, 11(1), 18; https://doi.org/10.3390/robotics11010018 - 27 Jan 2022
Cited by 3 | Viewed by 5400
Abstract
Conventional assembly methods using robots need to change end-effectors or operate two robot arms for assembly. In this study, we propose a screwdriving gripper that can perform the tasks required for assembly using a single robot arm. The proposed screwdriving gripper mimics a human two-handed operation and has three features: (1) it performs the pick-and-place, peg-in-hole, and screwdriving tasks required for assembly with a single gripper; (2) it uses a flexible link that complies with the contact force in the environment; and (3) it employs the same joints as the pronation and supination of the wrist, which help the manipulator to create a path. We propose a new gripper with 3 fingers and 12 degrees of freedom to implement these features; this gripper is composed of grasping and screwdriving parts. The grasping part has two fingers with a roll-yaw-pitch-pitch joint configuration. Its pitch joint implements wrist pronation and supination. The screwdriving part includes one finger with a roll-pitch-pitch joint configuration and a flexible link that can comply with the environment; this facilitates compliance based on the direction of the external force. The end of the screwdriving finger has a motor with a hex key attached, and an insert tip is attached to the back of the motor. A prototype of the proposed screwdriving gripper is manufactured, and a strategy for assembly using the prototype is proposed. The features of the proposed gripper are verified through screwdriving task experiments using a cooperative robotic arm. The experiments showed that the screwdriving gripper can perform tasks required for assembly, such as pick and place, peg-in-hole, and screwdriving. Full article
(This article belongs to the Topic Robotics and Automation in Smart Manufacturing Systems)
Show Figures
Figure 1: Example of an assembly that requires screwdriving and peg-in-hole tasks as well as pick and place.
Figure 2: Screwdriving task performed by a human and by a gripper that mimics a human two-handed operation.
Figure 3: Kinematic structure of the screwdriving gripper. The red, blue, and green cylinders represent roll, yaw, and pitch, respectively. The light blue cylinder is the screwdriving motor.
Figure 4: Path planning of the grasping parts for screwdriving and alignment of the screws and aligning guides.
Figure 5: Screwdriving part of the gripper, including the screwdriving motor and flexible link.
Figure 6: Flexible link shape according to the external force: (a) flexible link deformed by a force in a direction opposite to that of gravity; (b) flexible link supported on the link guide.
Figure 7: Procedure of the screwdriving and peg-in-hole tasks using the screwdriving gripper.
Figure 8: Results of the screwdriving motor operation strategy experiment: desired current, measured current, and angular velocity.
Figure 9: Manufactured screwdriving gripper and parts: (a) align guide and guide sleeve; (b) flexible link; (c) fingertip; (d) robotic screwdriving gripper.
Figure 10: Experimental environment of the screwdriving task with a cooperative robot arm.
Figure 11: Snapshots of the screwdriving task: (a) initial state and object detection; (b) object grip; (c) aligning; (d) finding the hole and aligning; (e) screwdriving; (f) completed process.
Figure 12: Results of the screwdriving task: (a) center of the grasping finger and the manipulator path; (b) position of the screwdriving finger and velocity of the screwdriving motor.
22 pages, 10087 KiB  
Article
Multi-Fidelity Information Fusion to Model the Position-Dependent Modal Properties of Milling Robots
by Maximilian Busch and Michael F. Zaeh
Robotics 2022, 11(1), 17; https://doi.org/10.3390/robotics11010017 - 21 Jan 2022
Cited by 6 | Viewed by 3208
Abstract
Robotic machining is a promising technology for post-processing large additively manufactured parts. However, the applicability and efficiency of robot-based machining processes are restricted by dynamic instabilities (e.g., due to external excitation or regenerative chatter). To prevent such instabilities, the pose-dependent structural dynamics of the robot must be accurately modeled. To do so, a novel data-driven information fusion approach is proposed: the spatial behavior of the robot’s modal parameters is modeled in a horizontal plane using probabilistic machine learning techniques. A probabilistic formulation allows an estimation of the model uncertainties as well, which increases the model reliability and robustness. To increase the predictive performance, an information fusion scheme is leveraged: information from a rigid body model of the fundamental behavior of the robot’s structural dynamics is fused with a limited number of estimated modal properties from experimental modal analysis. The results indicate that such an approach enables a user-friendly and efficient modeling method and provides reliable predictions of the directional robot dynamics within a large modeling domain. Full article
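The information-fusion idea can be illustrated with a minimal two-fidelity sketch in the spirit of AR1/NARGP schemes: a Gaussian process is fitted to many cheap low-fidelity samples (standing in for the rigid body model), and its prediction is appended as an input feature for a second Gaussian process trained on a few high-fidelity points (standing in for experimental modal analysis). The functions and data below are synthetic stand-ins, not the paper's robot model, and scikit-learn is used instead of a dedicated multi-fidelity library.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Synthetic stand-ins: low fidelity = rigid-body-like trend, high fidelity = "measured" truth.
f_lo = lambda x: 30.0 - 4.0 * x
f_hi = lambda x: 30.0 - 4.0 * x + 1.5 * np.sin(3.0 * x)

x_lo = np.linspace(0.0, 2.0, 40).reshape(-1, 1)       # many cheap simulation samples
x_hi = np.array([[0.1], [0.7], [1.3], [1.9]])         # few experimental samples

gp_lo = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(x_lo, f_lo(x_lo).ravel())

# The high-fidelity GP sees [x, gp_lo(x)], so it mainly learns the correction to the cheap model.
X_hi = np.hstack([x_hi, gp_lo.predict(x_hi).reshape(-1, 1)])
gp_hi = GaussianProcessRegressor(kernel=RBF(length_scale=[0.5, 5.0])).fit(X_hi, f_hi(x_hi).ravel())

x_test = np.array([[1.0]])
X_test = np.hstack([x_test, gp_lo.predict(x_test).reshape(-1, 1)])
mean, std = gp_hi.predict(X_test, return_std=True)
print(mean, std)   # fused prediction with its predictive uncertainty
```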
Show Figures

Figure 1. Methodology of the approach.
Figure 2. Measurement positions in the workspace of the robot and the base coordinate system.
Figure 3. Measured directional frequency response functions and synthesized estimation by the Least-Squares Complex Frequency Domain (LSCF) algorithm.
Figure 4. Spatial behavior of the modal parameters of the first vibration mode.
Figure 5. Training and testing data points for sampling strategies B3, B5, B7, B10, and B15 (A).
Figure 6. Spatial behavior of the first natural frequency as measured (column a) and predicted by simulation (column b). There is a linear dependency between simulation results and measurements (column c).
Figure 7. Benchmark metrics for each model's prediction at the testing data points.
Figure 8. Benchmark of conventional Gaussian process regression with an RBF kernel and the two multi-fidelity schemes AR1 and NARGP using iterative sampling of training data points.
Figure 9. Comparison of prediction accuracy on testing data of ω2 using sampling strategy B3, including the model's predictive uncertainty in the form of the 95% prediction interval.
Figure 10. Benchmark between different kernel designs for modeling damping ratios.
Figure 11. Benchmarks of different kernel designs for modeling the spatial behavior of R_i,xx and R_i,yy.
Figure 12. Measured directional frequency response functions and model predictions at testing point x_T = [1.175 m, 1.0 m, 1.24 m]^T using sampling strategy B5 (for simplicity, only 300 of 10,000 Monte Carlo samples are illustrated).
Figure A1. Spatial behavior of the cross residuals of the first vibration mode.
Figure A2. Rigid body model of the robot. The orientations of the six local body coordinate systems are illustrated with colored lines (x, y, z).
Figure A3. The first four mode shapes (first mode in (a), second mode in (b), third mode in (c), fourth mode in (d)), simulated with the rigid body model at x_T.
Figure A4. Benchmarks of different kernel designs for modeling the spatial behavior of R_i,xx and R_i,yy based on R² (R² values lower than 0.1 are not displayed to their full extent for better comprehensibility).
15 pages, 3316 KiB  
Article
Optimizing Cycle Time of Industrial Robotic Tasks with Multiple Feasible Configurations at the Working Points
by Matteo Bottin, Giovanni Boschetti and Giulio Rosati
Robotics 2022, 11(1), 16; https://doi.org/10.3390/robotics11010016 - 15 Jan 2022
Cited by 2 | Viewed by 3671
Abstract
Industrial robot applications should be designed to allow the robot to provide the best performance for increasing throughput. In this regard, both trajectory and task-order optimization are crucial, since they can heavily impact cycle time. Moreover, it is very common for a robotic application to be kinematically or functionally redundant, so that multiple arm configurations may fulfill the same task at the working points. In this context, even if the working cycle is composed of a small number of points, the number of possible sequences can be very high, so that the robot programmer usually cannot evaluate them all to obtain the shortest possible cycle time. One of the most well-known problems used to define the optimal task order is the Travelling Salesman Problem (TSP), but in its original formulation it does not allow different robot configurations at the same working point to be considered. This paper aims to overcome this limitation by adding mathematical and conceptual constraints to the problem. With such improvements, the TSP can be used successfully to optimize the cycle time of industrial robotic tasks where multiple configurations are allowed at the working points. Simulation and experimental results are presented to assess how cost (cycle time) and computational time are influenced by the proposed implementation. Full article
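To illustrate the combinatorial core of the problem, the sketch below (a brute-force baseline on made-up travel times, not the paper's modified-TSP formulation) enumerates both the visiting order and the configuration chosen at each working point for a tiny instance; realistic instances require the cluster-based TSP constraints described in the paper.

```python
# Minimal sketch: exhaustive search over visiting order and per-point configuration.
# Travel times are random placeholders; only feasible for very small instances.
import itertools
import math
import random

random.seed(0)
points, configs = [0, 1, 2], [0, 1]

# times[(p, c)][(q, d)]: hypothetical move time from point p in configuration c
# to point q in configuration d.
times = {(p, c): {(q, d): random.uniform(0.5, 2.0)
                  for q in points if q != p for d in configs}
         for p in points for c in configs}

best_cost, best_plan = math.inf, None
for order in itertools.permutations(points[1:]):                   # visit order, starting at point 0
    for choice in itertools.product(configs, repeat=len(points)):  # one configuration per point
        seq = [(0, choice[0])] + [(p, choice[p]) for p in order]
        cost = sum(times[a][b] for a, b in zip(seq, seq[1:]))
        if cost < best_cost:
            best_cost, best_plan = cost, seq

print("best (point, configuration) sequence:", best_plan, "cost:", round(best_cost, 2))
```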
(This article belongs to the Topic Industrial Robotics)
Show Figures

Figure 1. In this case, the same image of the workpiece (yellow) can be acquired by the camera (red), rotated by 180 degrees, with the robot in both configurations shown.
Figure 2. Number of possible paths for a robotic task with multiple feasible configurations at the working points: (a) free sequence (Equation (2), Ng = 1, k1 = k, m1 = N); the line with k = 1 is coincident with nN (Equation (1)); (b) number of possible paths with a fixed sequence (Equation (3), Ng = 1, k1 = k, m1 = N).
Figure 3. Ratio between the cost of the minimum solution with multiple configurations (tk) and the corresponding solution with k = 1 (t1): (a) no fixed point sequence; (b) fixed point sequence.
Figure 4. Example of a cluster. The TSP solution enters the cluster and visits all the positions within the cluster before moving to another cluster.
Figure 5. Example of connections within and between clusters. Each cluster comprises the feasible configurations of a working point, together with their mirror copies (identified with an asterisk *).
Figure 6. Example of clusters for a working cycle made of a set of Cartesian paths. Each cluster represents a path and comprises all possible configurations of the starting point (e.g., P1,i) and of the ending point (e.g., P2,i). The points are connected in such a way that the TSP will exit the cluster (i.e., the robot will exit the path) with the same configuration used while entering the cluster (i.e., with the same configuration held by the robot while entering the path).
Figure 7. Optimization process by means of the modified TSP.
Figure 8. Simulation results with k = 2: (a) normalized solution cost and (b) computational time (log scale).
Figure 9. Experimental setup including one Adept s650 anthropomorphic manipulator, one AVT Pike F-505 industrial camera, and one piece to be inspected (the pyramid trunk, to the left).
Figure 10. Monochromatic images of the symbols placed on the sides of the trunk pyramid.
Figure 11. Optimized robot movement for the case study (a) and order defined by the operator (b).
Figure 12. Joint displacements (a) and speeds (b) for Test 1, Scenarios 1 (blue) and 3 (red) of Table 2.
20 pages, 12776 KiB  
Article
A Recursive Algorithm for the Forward Kinematic Analysis of Robotic Systems Using Euler Angles
by Fernando Gonçalves, Tiago Ribeiro, António Fernando Ribeiro, Gil Lopes and Paulo Flores
Robotics 2022, 11(1), 15; https://doi.org/10.3390/robotics11010015 - 14 Jan 2022
Cited by 18 | Viewed by 5614
Abstract
Forward kinematics is one of the main research fields in robotics, where the goal is to obtain the position of a robot’s end-effector from its joint parameters. This work presents a method for achieving this using a recursive algorithm that builds a 3D computational model from the configuration of a robotic system. The orientation of the robot’s links is determined from the joint angles using Euler angles and rotation matrices. Kinematic links are modeled sequentially: the properties of each link are defined by its geometry, the geometry of its predecessor in the kinematic chain, and the configuration of the joint between them. This makes the method ideal for tackling serial kinematic chains. The proposed method is advantageous due to its theoretical increase in computational efficiency, ease of implementation, and simple interpretation of the geometric operations. This method is tested and validated by modeling a human-inspired robotic mobile manipulator (CHARMIE) in Python. Full article
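The flavour of such a recursive propagation can be sketched in a few lines of Python (a toy serial chain with assumed geometry, where each joint contributes a single Euler rotation about its local z axis; this is not the authors' CHARMIE model): each link's orientation is obtained by rotating its predecessor's frame by the joint angle, and each link's endpoint by adding the link offset expressed in that rotated frame.

```python
# Minimal sketch of recursive forward kinematics with rotation matrices.
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def forward_kinematics(joint_angles, link_offsets):
    """Recursively propagate orientation and position along a serial chain.

    joint_angles : revolute angles about each link's local z axis [rad]
    link_offsets : 3-vectors from one joint to the next, in local coordinates
    """
    R, p = np.eye(3), np.zeros(3)
    frames = []
    for theta, offset in zip(joint_angles, link_offsets):
        R = R @ rot_z(theta)             # orientation of the current link
        p = p + R @ np.asarray(offset)   # position of the next joint / end point
        frames.append((R.copy(), p.copy()))
    return frames

# Toy 3-link planar arm with 0.3 m links, all joints at 30 degrees.
frames = forward_kinematics([np.pi / 6] * 3, [[0.3, 0.0, 0.0]] * 3)
print("end-effector position:", np.round(frames[-1][1], 4))
```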
(This article belongs to the Special Issue Advances in Industrial Robotics and Intelligent Systems)
Show Figures

Figure 1. 3D model of the CHARMIE mobile manipulator in a CAD software (left) and in the developed kinematics simulation environment (right). On the left, the robot’s kinematic links are color coded and named.
Figure 2. Flowchart of the developed recursive algorithm for the computation of forward kinematics.
Figure 3. Examples of robots modeled using the proposed recursive algorithm: (a) a quadruped robot; (b) a fixed serial manipulator; (c) a hexapod robot.
Figure 4. Starting configuration for the application example of the recursive algorithm. The robot has been completely modeled with the exception of its head (link 8c).
Figure 5. 3D model of the robot’s head (link 8c) in CAD (left) and in the computational simulation in the x′8c y′8c z′8c local axes (right). With the exception of the axes of the global reference (xyz), represented points and axes have sub-index (8c), but this notation was omitted for simplification.
Figure 6. 3D model of the robot’s head in the computational simulation in the rotated x″8c y″8c z″8c axes. The labeling of points A–G was omitted. The non-rotated position of the head is shown in dotted grey lines. With the exception of the axes of the global reference (xyz), all points and axes represented have sub-index (8c), but this notation was omitted for simplification.
Figure 7. 3D model of the robot’s head in the computational simulation in the xyz global axes. The labeling of points A–I was omitted. The head after rotation, but before translation, is shown in dotted grey lines (left). On the right, the head is shown with the whole robot also visible. With the exception of the axes of the global reference (xyz), all points and axes represented have sub-index (8c), but this notation was omitted for simplification.
Figure 8. Schematic representation of the kinematic chain of the CHARMIE mobile manipulator.
Figure 9. Schematic representation of the geometry of joint 4 between link 3 and link 4 of the CHARMIE robot.
Figure 10. Schematic representation of the geometry of joint 5 between link 4 and link 5 of the CHARMIE robot.
Figure 11. Comparison of the robot’s configuration for the same problem analysed using the recursive algorithm presented in this paper (left) and WorkingModel4D (right).
Figure 12. Comparison between the results for the end-effector position 13a for the same conditions analysed using both the recursive algorithm and WorkingModel4D.
17 pages, 5657 KiB  
Article
Effects on Trajectory of a Spear Using Movement of Robotic Fish Equipped with Spear Shooting Mechanism
by Naoki Kawasaki, Kazuki Tonomura, Masashi Ohara, Ayane Shinojima and Yogo Takada
Robotics 2022, 11(1), 14; https://doi.org/10.3390/robotics11010014 - 11 Jan 2022
Viewed by 2948
Abstract
In Japan, the disruption of ecosystems caused by alien fish in lakes and ponds is a major issue. To address this problem, we propose that the robotic fish COMET can assist in alien fish extermination through the addition of a spear-shooting function. The extermination strategy is that, when COMET finds an alien fish, it approaches the fish without making it wary and then spears it. In this study, we investigated the spear-shooting process under different movement conditions to determine their impact on the accuracy of the spear’s trajectory. The results confirmed that a certain set of conditions, exploiting specific movements of the robotic fish, can improve the accuracy of hitting the target with the spear. Full article
(This article belongs to the Topic Motion Planning and Control for Robotics)
Show Figures

Figure 1. Structure of COMET.
Figure 2. Image of COMET.
Figure 3. Structure of the shooting mechanism.
Figure 4. Structure of the spear.
Figure 5. Image of the spear.
Figure 6. Structure of experimental rotational testing machine: (a) rolling motion; (b) yawing motion.
Figure 7. Experimental environment for the spear shooting.
Figure 8. Trajectories of spear shot from stationary shooting mechanism 30 times: (a) top view and (b) side view.
Figure 9. Trajectories of spear shot from stationary shooting mechanism 12 times: (a) top view and (b) side view.
Figure 10. Trajectories of spear shot from rotating shooting mechanism rolling to the left (RL): (a) top view and (b) side view.
Figure 11. Trajectories of spear shot from rotating shooting mechanism rolling to the right (RR): (a) top view and (b) side view.
Figure 12. Trajectories of spear shot from rotating shooting mechanism yawing to the left (YL): (a) top view and (b) side view.
Figure 13. Trajectories of spear shot from rotating shooting mechanism yawing to the right (YR): (a) top view and (b) side view.
Figure 14. Kernel density estimation of the target X-coordinates.
Figure 15. Definition sketch of velocity and force components.
Figure 16. Definition sketch of initial torque.
Figure 17. Simulated trajectory of spear shot from stationary shooting mechanism.
Figure 18. Simulated trajectory of spear shot from rotating shooting mechanism with YR.
16 pages, 4719 KiB  
Article
Mixed Position and Twist Space Synthesis of 3R Chains
by Neda Hassanzadeh and Alba Perez-Gracia
Robotics 2022, 11(1), 13; https://doi.org/10.3390/robotics11010013 - 10 Jan 2022
Cited by 2 | Viewed by 3147
Abstract
Mixed-position kinematic synthesis is used to not only reach a certain number of precision positions, but also impose certain instantaneous motion conditions at those positions. In the traditional approach, one end-effector twist is defined at each precision position in order to achieve better guidance of the end-effector along a desired trajectory. For one-degree-of-freedom linkages, that suffices to fully specify the trajectory locally. However, for systems with a higher number of degrees of freedom, such as robotic systems, it is possible to specify a complete higher-dimensional subspace of potential twists at particular positions. In this work, we focus on the 3R serial chain. We study the three-dimensional subspaces of twists that can be defined and set the mixed-position equations to synthesize the chain. The number and type of twist systems that a chain can generate depend on the topology of the chain; we find that the spatial 3R chain can generate seven different fully defined twist systems. Finally, examples of synthesis with several fully defined and partially defined twist spaces are presented. We show that it is possible to synthesize 3R chains for feasible subspaces of different types. This allows a complete definition of potential motions at particular positions, which could be used for the design of precise interaction with contact surfaces. Full article
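For orientation, the underlying relation can be written in generic screw notation (assumed here for illustration, not copied from the paper): the end-effector twist of a 3R chain is a linear combination of its three instantaneous joint screws, so at each precision position the attainable twists form a three-dimensional screw system that the mixed-position equations ask to match the prescribed subspace.

```latex
% Generic notation (an assumption for illustration): $_i is the i-th joint screw.
\mathsf{W}_{ee} \;=\; \dot{\theta}_1\,\$_1 \;+\; \dot{\theta}_2\,\$_2 \;+\; \dot{\theta}_3\,\$_3,
\qquad
\mathcal{T} \;=\; \operatorname{span}\{\$_1,\ \$_2,\ \$_3\}.
```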
(This article belongs to the Special Issue Kinematics and Robot Design IV, KaRD2021)
Show Figures

Figure 1. The spatial 3R chain. Forward kinematics is the product of the relative displacement D_3R and the displacement to the reference configuration D_0. The twist of the end-effector expressed in the fixed frame is W_ee.
Figure 2. Locus of the direction of the axes from the twist conditions. A general case (left) and a case in which the elliptic cone degenerates to two planes (right).
Figure 3. Solutions can be found as the intersection of the elliptic cone, the unit sphere, and the relative position condition. Here, the curves are depicted for one solution value of the angle θ2. Green curve: intersection of the elliptic cone with the unit sphere. Red curve: intersection of the elliptic cone and the relative position condition. Blue curve: intersection of the unit sphere and the relative position condition. The intersections of the curves are the solutions for the direction of the second joint axes, s2.
Figure 4. Example 1: the 3R robot reaching the two positions shown in Table 3. The joint axes are shown as dashed lines and link colors are those of Figure 1. Notice that two of the axes intersect at a point.
Figure 5. Example 1: twists of the end-effector. Left: subspaces of dimension 2 corresponding to the angular velocities of the end-effector at positions 1 and 2. Right: plots corresponding to the linear velocities. At position 2, the linear velocity subspace is of dimension 1.
Figure 6. Example 1: a second 3R robot solution for Example 1 showing the first and third axes intersection at the second position.
Figure 7. Example 2: one 3R robot solution at the first and second positions (top) and during the movement (bottom).
Figure 8. Example 3: solution 3R robot for a specified angular twist space at the origin. Dotted lines correspond to the joint axes at each position.
9 pages, 1484 KiB  
Article
Optimization of Link Length Fitting between an Operator and a Robot with Digital Annealer for a Leader-Follower Operation
by Takuya Otani, Atsuo Takanishi, Makoto Nakamura and Koichi Kimura
Robotics 2022, 11(1), 12; https://doi.org/10.3390/robotics11010012 - 8 Jan 2022
Cited by 3 | Viewed by 3859
Abstract
In recent years, the teleoperation of robots has become widespread in practical use. However, in some current modes of robot operation, such as leader-follower control, the operator must use visual information to recognize the physical deviation between him/herself and the robot and correct the operation instructions sequentially, which limits movement speed and places a heavy burden on the operator. In this study, we propose a leader-follower control parameter optimization method for the feedforward correction necessitated by deviations in link length between the robot and the operator. To optimize the parameters, we used the Digital Annealer developed by Fujitsu Ltd., which can solve combinatorial optimization problems at high speed. The main objective was to minimize the difference between the target hand coordinates and the actual hand position of the robot. In simulations, the proposed method decreased the difference between the hand position of the robot and the target. Moreover, this method enables optimum operation, in part by eliminating the need for the operator to maintain an unreasonable posture, which arises with some robots because the operator’s hand position is otherwise unsuitable for achieving the objective. Full article
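As a stand-in for the Digital Annealer formulation (which is not reproduced here), the sketch below grid-searches discretized joint-angle compensation offsets for a planar two-link arm so that the follower hand lands on the leader's target point; all link lengths, angles, and offset ranges are placeholders.

```python
# Minimal sketch: brute-force search over discretized compensation parameters that
# minimize the gap between the operator's target point and the robot's hand position.
import itertools
import numpy as np

def hand_position(links, angles):
    """Planar forward kinematics of a 2-link arm."""
    (l1, l2), (th1, th2) = links, angles
    return np.array([l1 * np.cos(th1) + l2 * np.cos(th1 + th2),
                     l1 * np.sin(th1) + l2 * np.sin(th1 + th2)])

operator_links, robot_links = (0.30, 0.28), (0.45, 0.20)     # placeholder lengths [m]
operator_angles = (np.deg2rad(40.0), np.deg2rad(35.0))       # measured leader posture
target = hand_position(operator_links, operator_angles)      # where the operator points

# Candidate compensation offsets added to the mirrored joint commands.
offsets = np.deg2rad(np.arange(-30, 31, 2))
best = min(itertools.product(offsets, repeat=2),
           key=lambda d: np.linalg.norm(
               hand_position(robot_links,
                             (operator_angles[0] + d[0],
                              operator_angles[1] + d[1])) - target))
print("best compensation offsets [deg]:", np.rad2deg(best).round(1))
```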
(This article belongs to the Section Intelligent Robots and Mechatronics)
Show Figures

Figure 1. Schematic view of conventional leader-follower teleoperation for a robot arm with a different link ratio. Black lines represent the operator’s arm, and blue lines represent the robot arm. Although the operator wants to reach just in front of him/herself, the robot hand does not reach this point. Normally, geometric compensation is made by measuring the human link length or by practice with visual feedback.
Figure 2. Schematic view of proposed leader-follower teleoperation for a robot arm which has a different link ratio. Black lines represent the operator’s arm, and red lines represent the robot arm. For reaching the operator’s objective, the robot arm moves with the compensation parameters for the joint angles.
Figure 3. Simulation result for Goal A: yellow point represents the robot hand’s goal position; green links represent the human arm; blue links represent the robot arm without compensation; red links represent a robot arm with compensation.
Figure 4. Simulation result for Goal B: yellow point represents the robot hand’s goal position; green links represent the human arm; blue links represent the robot arm without compensation; red links represent the robot arm with compensation.
16 pages, 3963 KiB  
Article
Design of a Labriform-Steering Underwater Robot Using a Multiphysics Simulation Environment
by Daniele Costa, Cecilia Scoccia, Matteo Palpacelli, Massimo Callegari and David Scaradozzi
Robotics 2022, 11(1), 11; https://doi.org/10.3390/robotics11010011 - 7 Jan 2022
Cited by 3 | Viewed by 4388
Abstract
Bio-inspired solutions devised for Autonomous Underwater Robots are currently investigated by researchers as a source of propulsive improvement. To address this ambitious objective, the authors have designed a carangiform swimming robot, which represents a compromise in terms of efficiency and maximum velocity. The requirements of stabilizing a course and performing turns were not met in their previous works. Therefore, the aim of this paper is to improve the vehicle maneuvering capabilities by means of a novel transmission system capable of transforming the constant angular velocity of a single rotary actuator into the pitching–yawing rotation of fish pectoral fins. Here, the biomimetic thrusters exploit the drag-based momentum transfer mechanism of labriform swimmers to generate the necessary steering torque. Aside from inertia and encumbrance reduction, the main improvement of this solution is the inherent synchronization of the system granted by the mechanism’s kinematics. The system was sized by using the experimental results collected by biologists and then integrated in a multiphysics simulation environment to predict the resulting maneuvering performance. Full article
(This article belongs to the Special Issue Kinematics and Robot Design IV, KaRD2021)
Show Figures

Figure 1. Fish morphological features commonly depicted in the literature.
Figure 2. Pectoral fin kinematics during power strokes (a) and recovery strokes (b) in drag-based labriform locomotion; pectoral fin kinematics in lift-based labriform locomotion (c).
Figure 3. Geometry of power and recovery strokes: the fin was drawn thin during the power stroke and flat during the recovery stroke to show its perpendicular and parallel orientation with respect to the horizontal plane (a); blade-element kinematics [6] (b).
Figure 4. Fin beat-cycle: power stroke: the fins are kept perpendicular to the horizontal plane as they rotate counterclockwise about the fish yaw axis at high speed (a); feathering recovery stroke: the fins turn around their pitch axis, flattening on the horizontal plane, as they slowly rotate clockwise about the yaw axis (b); main recovery stroke: the fins are kept parallel to the horizontal plane as they continue to rotate clockwise about the yaw axis (c); unfeathering recovery stroke: the fins turn around their yaw axis until they are perpendicular to the horizontal plane, as they complete the rotation around the yaw axis, thus returning to the initial position and configuration (d).
Figure 5. Sub-system A (a); sub-system B (b).
Figure 6. Transmission mechanism and pectoral fin complete assembly: frame (orange), rotary motor and input shaft (grey), sub-system A (dark blue), sub-system B (red), shaft C and double Cardan joints (cyan), fin shaft D (pearl); front view (a), back view (b).
Figure 7. Double Cardan joint assembly.
Figure 8. Oscillating glyph at the beginning of a power stroke.
Figure 9. Fin angular position and orientation in a fin beat-cycle.
Figure 10. Attack angle α during a power stroke: right fin (a) and left fin (b); the values were computed when the swimming velocity V was 0.125 BL/s (red), 0.25 BL/s (blue), and 0.5 BL/s (green). The markers on the curves refer to the innermost (circle), median (plus sign), and outermost (asterisk) blade-element.
Figure 11. Cylindrical body subject to the hydrodynamic loads (red) and to the propulsive/maneuvering forces generated by the caudal and pectoral fins (blue) (a); robotic fish subject to the hydrodynamic and propulsive loads applied to the tail sections and to the rigid forebody (b).
Figure 12. Multibody model of the underwater robot.
Figure 13. Physical prototype of the transmission system manufactured by SLA.
17 pages, 4371 KiB  
Article
Dimensional Synthesis of a Novel 3-URU Translational Manipulator Implemented through a Novel Method
by Raffaele Di Gregorio
Robotics 2022, 11(1), 10; https://doi.org/10.3390/robotics11010010 - 5 Jan 2022
Cited by 3 | Viewed by 3004
Abstract
A dimensional synthesis of parallel manipulators (PMs) consists of determining the values of the geometric parameters that affect the platform motion so that a useful workspace with assigned sizes can be suitably located in a singularity-free region of its operational space. The main goal of this preliminary dimensioning is to keep the PM far enough from singularities to avoid high internal loads in the links and to guarantee a good positioning precision (i.e., to obtain good kinematic performance). This paper presents a novel method for the dimensional synthesis of translational PMs (TPMs) and applies it to a TPM previously proposed by the author. The proposed method, which is based on Jacobian properties, exploits the fact that TPM parallel Jacobians are block diagonal matrices to overcome the typical drawbacks of indices based on Jacobian properties. The proposed method can also be applied to all lower-mobility PMs with block diagonal Jacobians that separate platform rotations from platform translations (e.g., parallel wrists). Full article
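One common way to write the block-diagonal structure that such a method exploits (generic notation assumed here, not necessarily the paper's) separates the rows constraining platform rotation from those relating platform translation to the actuated-joint rates:

```latex
% Assumed generic form: J_omega constrains the angular velocity (omega = 0 away from
% rotation singularities), while J_v maps the platform velocity to the actuated rates.
\begin{bmatrix} \mathbf{J}_{\omega} & \mathbf{0} \\ \mathbf{0} & \mathbf{J}_{v} \end{bmatrix}
\begin{bmatrix} \boldsymbol{\omega} \\ \dot{\mathbf{p}} \end{bmatrix}
=
\begin{bmatrix} \mathbf{0} \\ \mathbf{B}\,\dot{\mathbf{q}} \end{bmatrix}
```

With this separation, distance-from-singularity indices for the translational behavior can be evaluated on the translational block alone.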
(This article belongs to the Special Issue Kinematics and Robot Design IV, KaRD2021)
Show Figures

Figure 1. LaMaViP 3-URU: (a) overall scheme and notations; (b) detailed scheme of the i-th limb (figure reproduced from [38]).
Figure 2. An octant of Ox_b y_b z_b: (a) the line x = y = z and the plane passing through point D = (d, d, d)^T perpendicular to that line; (b) top view along the line x = y = z containing the circumference, centered at D, with radius r, lying on the plane perpendicular to the line x = y = z.
Figure 3. Diagram of the function defined by Equation (27).
Figure 4. Values of k_v along the line x = y = z = −d for different values of ρ and limbs assembled so that the index j appearing in Equation (21) is equal to 0 in the cases: (a) μ = 0.5, ν = 1; (b) μ = 0.5, ν = 1.5; (c) μ = 0.5, ν = 2; (d) μ = 1, ν = 2.
Figure 5. Values of k_v on circumferences (see Figure 2) with radius r, centered at point D′ = (−d, −d, −d)^T of the line x = y = z, that lie on planes perpendicular to the same line, in the case ρ = 4d_b for (a,d,g,l) (d/d_b) = (d_max/d_b) − 0.5, (b,e,h,m) (d/d_b) = (d_max/d_b), (c,f,i,n) (d/d_b) = (d_max/d_b) + 0.5, and the geometries (a,b,c) μ = 0.5, ν = 1, (d,e,f) μ = 0.5, ν = 1.5, (g,h,i) μ = 0.5, ν = 2, and (l,m,n) μ = 1, ν = 2.
Figure 6. View of the i-th limb in the plane perpendicular to the unit vector g_i, represented together with the force F_i v_i it applies to the platform and the torque M_i g_i applied by the actuator in the actuated joint.
Figure 7. Cylindrical workspace: view of the i-th limb in the meridian plane passing through the line x = y = z and containing the coordinate axis of Ox_b y_b z_b that is parallel to the unit vector e_i.
Figure 8. The i-th limb with a linear actuator that controls the actuated-joint variable θ_i2.
13 pages, 7275 KiB  
Communication
A Methodology for Flexible Implementation of Collaborative Robots in Smart Manufacturing Systems
by Hermes Giberti, Tommaso Abbattista, Marco Carnevale, Luca Giagu and Fabio Cristini
Robotics 2022, 11(1), 9; https://doi.org/10.3390/robotics11010009 - 4 Jan 2022
Cited by 23 | Viewed by 6472
Abstract
Small-scale production is relying more and more on personalization and flexibility as a key innovation for success in response to market needs such as the diversification of consumer preferences and/or greater regulatory pressure. This is possible thanks to assembly lines that are dynamically adaptable to new production requirements and easily reconfigurable and reprogrammable in response to any change in the production line. In such new automated production lines, where traditional automation is not applicable, human and robot collaboration can be established, giving birth to a kind of industrial craftsmanship. The idea at the basis of this work is to take advantage of collaborative robotics by treating the robots like any other generic industrial tool. To overcome the need for complex programming, identified in the literature as one of the main issues preventing cobot diffusion in industrial environments, the paper proposes an approach that simplifies the programming process while still maintaining high flexibility through a pyramidal, parametrized approach exploiting the cobot’s collaborative features. An Interactive Refinement Programming procedure is described and validated through a real test case performed as a pilot in the Building Automation department of ABB in Vittuone (Milan, Italy). The key novel ingredients of this approach are a first translation phase, carried out by production process engineers who convert the sequence of assembly operations into a preliminary code built as a sequence of robot operations, followed by an on-line correction carried out by non-expert users who interact with the machine to define the input parameters that make the robotic code runnable. The users in this second step do not need any competence in programming robotic code. Moreover, from an economic point of view, a standardized way of assessing the convenience of the robotic investment is proposed. Both economic and technical results highlight improvements over the traditional automation approach, demonstrating the possibility of opening further opportunities for collaborative robots when small/medium batch sizes are involved. Full article
(This article belongs to the Special Issue Kinematics and Robot Design IV, KaRD2021)
Show Figures

Figure 1. Layered pyramidal framework.
Figure 2. Structure of a standard and parametric skill.
Figure 3. Steps of the Interactive Refinement Procedure (IRP).
Figure 4. Implemented firmware update process for security sensor electronic boards: (a) software development for the manager program; (b) view of the two Yumi arms operating at the same time.
Figure 5. Comparison of payback time scenarios.
15 pages, 5736 KiB  
Article
A Reconfiguration Algorithm for the Single-Driven Hexapod-Type Parallel Mechanism
by Alexey Fomin, Anton Antonov and Victor Glazunov
Robotics 2022, 11(1), 8; https://doi.org/10.3390/robotics11010008 - 2 Jan 2022
Viewed by 3022
Abstract
This paper presents a hexapod-type reconfigurable parallel mechanism that operates from a single actuator. The mechanism design allows diverse output link trajectories to be reproduced without using additional actuators. The paper provides the kinematic analysis, in which the analytical relationships between the output link coordinates and the actuated movement are determined. These relations are then used to develop an original and computationally effective algorithm for the reconfiguration procedure. The algorithm enables the selection of mechanism parameters to realize a specific output link trajectory. Several examples demonstrate the implementation of the proposed techniques. CAD simulations on a virtual prototype of the mechanism verify the correctness of the suggested algorithm. Full article
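A rough sketch of the reconfiguration step (with an assumed, purely illustrative relation standing in for the paper's analytical equations) is to fit the mechanism parameters so that the modelled output-link coordinate tracks a prescribed trajectory, starting from an initial guess:

```python
# Minimal sketch: identify parameters beta so that a modelled output-link coordinate
# X_g(t; beta) follows a desired trajectory. The model below is a made-up stand-in.
import numpy as np
from scipy.optimize import least_squares

def output_coordinate(beta, t):
    """Hypothetical stand-in for the closed-form relation X_g(t; beta)."""
    a, b, c = beta
    return a * np.sin(b * t) + c

t = np.linspace(0.0, 2.0 * np.pi, 100)
X_desired = 0.05 * np.sin(2.0 * t) + 0.10        # prescribed output trajectory

beta0 = np.array([0.03, 1.5, 0.05])               # initial guess for the parameters
sol = least_squares(lambda beta: output_coordinate(beta, t) - X_desired, beta0)
print("identified parameters:", sol.x.round(4))
```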
Show Figures

Figure 1. Virtual prototype of the hexapod-type reconfigurable parallel mechanism with single actuation.
Figure 2. Planar kinematic chain placed within the circular guide.
Figure 3. Output link trajectory with X_g(t) = φ(t) according to (29).
Figure 4. Output link trajectory with X_g(t) = z_P(t) according to (31).
Figure 5. Output link trajectory with X_g(t) = x_P(t) according to (33).
Figure 6. Angle q calculated for various given output link coordinates X_g(t): blue line for X_g(t) = φ(t) according to (29); red line for X_g(t) = z_P(t) according to (31); green line for X_g(t) = x_P(t) according to (33).
Figure 7. Output link trajectory with X_g(t) = φ(t) according to (29) and an updated initial guess (35) for β^0.
Figure 8. Output link trajectory with X_g(t) = z_P(t) according to (31) and an updated initial guess (35) for β^0.
Figure 9. Output link trajectory with X_g(t) = x_P(t) according to (33) and an updated initial guess (35) for β^0.
Figure 10. Angle q calculated for various given output link coordinates X_g(t) and an updated initial guess (35) for β^0: blue line for X_g(t) = φ(t) according to (29); red line for X_g(t) = z_P(t) according to (31); green line for X_g(t) = x_P(t) according to (33).
22 pages, 9905 KiB  
Article
Faster than Real-Time Surface Pose Estimation with Application to Autonomous Robotic Grasping
by Yannick Roberts, Amirhossein Jabalameli and Aman Behal
Robotics 2022, 11(1), 7; https://doi.org/10.3390/robotics11010007 - 2 Jan 2022
Viewed by 3147
Abstract
Motivated by grasp planning applications within cluttered environments, this paper presents a novel approach to performing real-time surface segmentation of never-before-seen objects scattered across a given scene. The approach takes a 2D depth map as input, to which a first-principles algorithm is applied to exploit the fact that continuous surfaces are bounded by contours of high gradient. From these regions, the associated object surfaces can be isolated and further adapted for grasp planning. This paper also provides details for extracting the six-DOF pose of an isolated surface and presents the case of leveraging such a pose to execute planar grasping that achieves both force and torque closure. As a consequence of the highly parallel software implementation, the algorithm is shown to outperform prior approaches across all notable metrics and is also shown to be invariant to object rotation, scale, orientation relative to other objects, clutter, and varying degrees of noise. This allows for a robust set of operations that could be applied to many areas of robotics research. The algorithm is faster than real time in the sense that it is nearly two times faster than the sensor rate of 30 fps. Full article
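The first-principles step can be sketched as follows (a synthetic depth map and an assumed gradient threshold, not the paper's tuned pipeline): pixels with a high depth gradient are marked as depth discontinuities, and the connected regions they enclose become surface candidates.

```python
# Minimal sketch: depth discontinuities as high-gradient pixels of a depth map,
# followed by connected-component labeling of the enclosed regions.
import numpy as np
from scipy import ndimage

# Synthetic depth map: a 40x40 background plane at 1.0 m with a raised box at 0.7 m.
depth = np.full((40, 40), 1.0)
depth[10:25, 12:30] = 0.7

# Depth gradient magnitude via Sobel filtering along both image axes.
gx = ndimage.sobel(depth, axis=1)
gy = ndimage.sobel(depth, axis=0)
grad_mag = np.hypot(gx, gy)

# Pixels whose gradient exceeds a (hypothetical) threshold are depth discontinuities.
dd_mask = grad_mag > 0.1
print("depth-discontinuity pixels:", int(dd_mask.sum()))

# Surfaces are the connected regions left when the discontinuity contours are removed.
labels, n_surfaces = ndimage.label(~dd_mask)
print("isolated surface candidates:", n_surfaces)
```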
(This article belongs to the Section Intelligent Robots and Mechatronics)
Show Figures

Figure 1. An illustration showcasing the Depth Discontinuity (DD) and Curvature Discontinuity (CD) features that are used to isolate surfaces within a scene.
Figure 2. A flow diagram indicating the logic operations performed on a depth map to obtain the DD and CD features necessary for generating surface contours.
Figure 3. Series of steps relied upon to isolate individual surface segments. (a) Initial scene (usually depicted by an RGB image). (b) Depth map in accordance with z = f(x, y). (c) Extracted DD (red) and CD (blue) features as described in the section above. (d) Closed contours generated by a union operation of the DD and CD maps. (e) The final segmentation process separates the surfaces with respect to their bounding contours; in this case, each contour is represented by a distinct color.
Figure 4. Naive surface segmentation applied to a synthetically generated scene (above) contrasted with naive results applied to an unfiltered real-world scene (below). (left) RGB capture of the scene. (center) Depth image. (right) Resulting segmentation.
Figure 5. A series of steps needed to robustify a depth frame against noise. (a) RGB image. (b) Filtering results without any robustification measures. (c) Results with a Wiener filter applied. (d) Results with a Wiener filter and Sobel operator. (e) Results with Wiener filtering, Sobel operator, and edge smoothing.
Figure 6. NDP filling passes. Left to right: beginning with the initial depth image (left), set boundary NDP pixels using the proposed averaging technique until the NDP regions are filled (right).
Figure 7. An example of a pipeline containing multiple filtering stages.
Figure 8. Overall program architecture used to perform surface segmentation and extract the 6-DOF surface pose.
Figure 9. Set of structuring elements used as pruning templates to decide whether a pixel should be removed [25].
Figure 10. Illustration of 6-DOF surface pose estimation. (Left) Scene. (Center) Contours. (Right) Point cloud showcasing various surface poses.
Figure 11. The subset of images presented by [19] during their simulation illustration.
Figure 12. (Left) Number of surfaces in a scene versus execution time; execution time is invariant to scene complexity. (Right) Accuracy versus number of surfaces within a scene.
Figure 13. Qualitative surface-segmentation results for cluttered scenes from the OSD dataset. The results show that the proposed algorithm is able to capture surfaces across cluttered scenes.
Figure 14. Qualitative surface-segmentation results for scenes streamed from Microsoft’s Azure Kinect camera.
34 pages, 4907 KiB  
Article
A Screw Theory Approach to Computing the Instantaneous Rotation Centers of Indeterminate Planar Linkages
by Juan Ignacio Valderrama-Rodríguez, José M. Rico, J. Jesús Cervantes-Sánchez and Ricardo García-García
Robotics 2022, 11(1), 6; https://doi.org/10.3390/robotics11010006 - 31 Dec 2021
Cited by 2 | Viewed by 3676
Abstract
This paper presents a screw theory approach for the computation of the instantaneous rotation centers of indeterminate planar linkages. Since the end of the 19th century, the determination of the instantaneous rotation, or velocity centers of planar mechanisms has been an important topic in kinematics that has led to the well-known Aronhold–Kennedy theorem. At the beginning of the 20th century, it was found that there were planar mechanisms for which the application of the Aronhold–Kennedy theorem was unable to find all the instantaneous rotation centers (IRCs). These mechanisms were denominated complex or indeterminate. The beginning of this century saw a renewed interest in complex or indeterminate planar mechanisms. In this contribution, a new and simpler screw theory approach for the determination of indeterminate rotation centers of planar linkages is presented. The new approach provides a simpler method for setting up the equations. Furthermore, the algebraic equations to be solved are simpler than the ones published to date. The method is based on the systematic application of screw theory, isomorphic to the Lie algebra, se(3), of the Euclidean group, SE(3), and the invariant symmetric bilinear forms defined on se(3). Full article
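As a compact reminder of the underlying idea (generic planar screw notation, assumed here rather than reproduced from the paper), a planar relative twist locates its instantaneous rotation center directly, and twist composition yields the collinearity statement of the Aronhold–Kennedy theorem:

```latex
% Generic planar twist of body j relative to body i, referenced at the origin;
% the velocity field vanishes at the instantaneous rotation center r_ij.
\boldsymbol{\xi}_{ij} =
\begin{pmatrix} \omega_{ij}\,\hat{\mathbf{k}} \\ \mathbf{v}_{ij} \end{pmatrix},
\qquad
\mathbf{v}_{ij} + \omega_{ij}\,\hat{\mathbf{k}} \times \mathbf{r}_{ij} = \mathbf{0}
\;\Rightarrow\;
\mathbf{r}_{ij} = \frac{\hat{\mathbf{k}} \times \mathbf{v}_{ij}}{\omega_{ij}},
\qquad
\boldsymbol{\xi}_{13} = \boldsymbol{\xi}_{12} + \boldsymbol{\xi}_{23}
```

The indeterminate cases are those in which these collinearity conditions alone cannot fix all the secondary centers, which is where the equations built from the invariant bilinear forms are applied.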
(This article belongs to the Special Issue Kinematics and Robot Design IV, KaRD2021)
Show Figures

Figure 1. Three rigid bodies and the related points and vectors.
Figure 2. An eight-bar “single flyer” indeterminate planar linkage.
Figure 3. Flow diagram of the proposed algorithm.
Figure 4. The subgraph of secondary rotation centers required to solve the “single-flyer” indeterminate planar linkage.
Figure 5. All rotation centers of a “single flyer” planar eight-bar linkage.
Figure 6. Eight-bar “double-butterfly” indeterminate planar linkage.
Figure 7. All rotation centers of an eight-bar “double-butterfly” planar linkage.
Figure 8. Subgraphs of secondary rotation centers required to solve the “double-butterfly” indeterminate planar linkage.
Figure A1. A first configuration of the three centers.
Figure A2. A second configuration of the three centers.
38 pages, 15554 KiB  
Article
Development and Usability Testing of a Finger Grip Enhancer for the Elderly
by Dominic Wen How Tan, Poh Kiat Ng, Ervina Efzan Mhd Noor, Adi Saptari, Chee Chen Hue and Yu Jin Ng
Robotics 2022, 11(1), 5; https://doi.org/10.3390/robotics11010005 - 30 Dec 2021
Cited by 8 | Viewed by 5236
Abstract
As people age, their finger function deteriorates due to muscle, nerve, and brain degeneration. While exercises might delay this deterioration, an invention that enhances elderly people’s pinching abilities is essential. This study aims to design and develop a finger grip enhancer that facilitates [...] Read more.
As people age, their finger function deteriorates due to muscle, nerve, and brain degeneration. While exercises might delay this deterioration, an invention that enhances elderly people’s pinching abilities is essential. This study aims to design and develop a finger grip enhancer that facilitates the day-to-day pinching activities of elderly people. This research is an extension of a previous study that conceptualised a finger grip enhancer. The device facilitates finger flexion on the thumb and index finger, and weighs 520 g, allowing for improved portability and sufficient force exertion (13.9 N) for day-to-day pinching. To test for usability, eleven subjects aged 65 years and above performed a pinch-lift-hold test on various household objects. The pinch force before and after utilising the device was measured. Using Minitab 18, the statistical significance of using this device was analysed with a paired-samples t-test. With this device, the elderly people’s pinching abilities significantly improved in both pinch force and pinch force steadiness (p < 0.05). The proposed device has the potential to enhance elderly people’s quality of life by supporting a firm pinch in the handling of everyday objects. This research has applicational value in developing exoskeleton devices for patients who require rehabilitation. Full article
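For readers who want to reproduce the kind of analysis described, a paired-samples t-test can be run in a few lines; the sketch below uses SciPy rather than Minitab 18, and the pinch-force numbers are invented placeholders, not the measurements reported in the paper.

```python
import numpy as np
from scipy import stats

# Placeholder pinch forces (N) for the same participants without and with the device.
# These numbers are illustrative only; they are NOT the data reported in the paper.
without_device = np.array([8.1, 7.4, 9.0, 6.8, 7.9, 8.5, 7.2, 6.9, 8.8, 7.6, 8.0])
with_device    = np.array([11.2, 10.1, 12.4, 9.5, 11.0, 12.1, 10.3, 9.8, 12.0, 10.7, 11.5])

# Paired-samples t-test: each subject serves as their own control.
t_stat, p_value = stats.ttest_rel(with_device, without_device)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
```

A paired design is the natural choice here because each participant is measured twice, so between-subject variability cancels out of the comparison.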
(This article belongs to the Special Issue Kinematics and Robot Design IV, KaRD2021)
Show Figures

Figure 1: Linkage and joint mechanism of a human finger.
Figure 2: Pinch enhancer tree diagram.
Figure 3: Finger grip enhancer divided into subassemblies (control system, palm glove, and actuation).
Figure 4: Mechanical design of the phalange transmission. Note: MCP, metacarpophalangeal; PIP, proximal interphalangeal; DIP, distal interphalangeal.
Figure 5: Sketched design of the device prototype with cable travel.
Figure 6: Control flow of the device.
Figure 7: Electrical circuit design of the device prototype.
Figure 8: Framework of design analyses, test plans, and proof of concept for the finger grip enhancer.
Figure 9: Initial and final design after finite element analysis (FEA).
Figure 10: External compressive pinch force applied on the component.
Figure 11: Force applied during the FEA for the second condition.
Figure 12: Drawing comparison of the finger ring component before and after FEA.
Figure 13: Test items for pinch force measurements (detergent cup, clothes peg, golf ball, insect repellent bottle, power plug, and remote control).
Figure 14: Force calibration of the Tekscan® Flexiforce sensor.
Figure 15: Sitting posture of participants during force measurements.
Figure 16: Starting position of participants during the pinch force test.
Figure 17: Pinch assistant prototype.
Figure 18: Comparison of force during the pinch action for each test object.
Figure 19: Pinch force performance of an elderly person during the clothes peg pinch test before and after using the prototype.
Figure 20: Pinch force fluctuation for each object test with and without the assistance device.
Figure 21: Pinch force versus weight for different hand exoskeletons.
21 pages, 3595 KiB  
Article
Mechatronic Model of a Compliant 3PRS Parallel Manipulator
by Antonio Ruiz, Francisco J. Campa, Oscar Altuzarra, Saioa Herrero and Mikel Diez
Robotics 2022, 11(1), 4; https://doi.org/10.3390/robotics11010004 - 28 Dec 2021
Cited by 5 | Viewed by 3401
Abstract
Compliant mechanisms are widely used for instrumentation and measuring devices for their precision and high bandwidth. In this paper, the mechatronic model of a compliant 3PRS parallel manipulator is developed, integrating the inverse and direct kinematics, the inverse dynamic problem of the manipulator [...] Read more.
Compliant mechanisms are widely used for instrumentation and measuring devices for their precision and high bandwidth. In this paper, the mechatronic model of a compliant 3PRS parallel manipulator is developed, integrating the inverse and direct kinematics, the inverse dynamic problem of the manipulator and the dynamics of the actuators and the control. The kinematic problem is solved, assuming a pseudo-rigid model for the deflection in the compliant revolute and spherical joints. The inverse dynamic problem is solved, using the Principle of Energy Equivalence. The mechatronic model allows the prediction of the bandwidth of the manipulator motion in the 3 degrees of freedom for a given control and set of actuators, helping in the design of the optimum solution. A prototype is built and validated, comparing experimental signals with the ones from the model. Full article
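As a rough illustration of the two modeling ingredients named in the abstract, the sketch below treats a compliant joint as an ideal joint plus a torsional spring (the pseudo-rigid assumption) and maps a platform wrench and the spring torque onto actuator forces through Jacobian transposes, which is the virtual-work reading of energy equivalence. The Jacobians and stiffness value are made-up placeholders, not the paper's identified model.

```python
import numpy as np

# Pseudo-rigid joint: a compliant revolute flexure is treated as a pin joint
# with a torsional spring, so its restoring torque is simply k * theta.
k_joint = 2.5          # N*m/rad, placeholder flexure stiffness
theta = 0.02           # rad, placeholder joint deflection
tau_spring = k_joint * theta

# Energy equivalence / virtual work: the power delivered by the actuators balances
# the power of the task-space wrench plus the power stored in the joint spring,
#   f_act^T q_dot = w^T x_dot + tau_spring * theta_dot,
# so with x_dot = Jx q_dot and theta_dot = Jt q_dot:
#   f_act = Jx^T w + Jt^T tau_spring.
Jx = np.array([[0.9, 0.1, 0.0],    # placeholder task-space Jacobian (rows: z, tilt_x, tilt_y)
               [0.1, 0.8, 0.2],
               [0.0, 0.2, 0.9]])
Jt = np.array([[0.3, 0.0, 0.1]])   # placeholder deflection Jacobian of one compliant joint

w = np.array([5.0, 0.2, -0.1])     # placeholder external wrench on the platform
f_act = Jx.T @ w + Jt.T @ np.array([tau_spring])
print("actuator forces (N):", f_act)
```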
Show Figures

Figure 1: (Top) The compliant 3PRS parallel manipulator developed. (Bottom) The revolute and spherical joints used.
Figure 2: FEM simulations: (Left) maximum position in the Z direction; (Right) maximum tilting around the Y axis.
Figure 3: Decoupling of the compliant mechanism and actuators.
Figure 4: Mechatronic model of the compliant 3PRS.
Figure 5: Schematic diagram of the 3PRS kinematics.
Figure 6: Modeling of the control and actuator dynamics.
Figure 7: Experimental setup.
Figure 8: Experimental vs. simulated signals: (Left) angular position of the motors; (Right) motor torque.
Figure 9: Experimental vs. simulated signals: (Left) angular position of the motors; (Right) motor torque.
Figure A1: Reference systems in the spherical joints.
22 pages, 6035 KiB  
Article
Gait Transition from Pacing by a Quadrupedal Simulated Model and Robot with Phase Modulation by Vestibular Feedback
by Takahiro Fukui, Souichiro Matsukawa, Yasushi Habu and Yasuhiro Fukuoka
Robotics 2022, 11(1), 3; https://doi.org/10.3390/robotics11010003 - 25 Dec 2021
Cited by 7 | Viewed by 4180
Abstract
We propose a method to achieve autonomous gait transition according to speed for a quadruped robot pacing at medium speeds. We verified its effectiveness through experiments with the simulation model and the robot we developed. In our proposed method, a central pattern generator [...] Read more.
We propose a method to achieve autonomous gait transition according to speed for a quadruped robot pacing at medium speeds. We verified its effectiveness through experiments with the simulation model and the robot we developed. In our proposed method, a central pattern generator (CPG) is applied to each leg. Each leg is controlled by a PD controller based on output from the CPG. The four CPGs are coupled, and a hard-wired CPG network generates a pace pattern by default. In addition, we feed the body tilt back to the CPGs in order to adapt to the body oscillation that changes according to the speed. As a result, our model and robot achieve stable changes in speed while autonomously generating a walk at low speeds and a rotary gallop at high speeds, even though the walk and rotary gallop are not preprogrammed. The body tilt angle feedback is the only factor involved in the autonomous generation of gaits, so it can be easily used for various quadruped robots. Therefore, it is expected that the proposed method will be an effective control method for quadruped robots. Full article
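The control idea, four coupled oscillators whose phases are nudged by the measured body tilt, can be caricatured with simple phase oscillators. The authors use neuron-based CPGs driving PD-controlled joints; the fragment below substitutes Kuramoto-style phase oscillators purely to show how a tilt-feedback term can reshape the inter-leg phase differences away from the hard-wired pace. All gains and coupling weights are arbitrary.

```python
import numpy as np

LEGS = ["LF", "LH", "RF", "RH"]
# Desired phase offsets for a hard-wired pace: lateral legs in phase, left/right half a cycle apart.
pace_offsets = np.array([0.0, 0.0, np.pi, np.pi])

def step_cpg(phases, body_tilt, omega=2 * np.pi * 1.5, k_couple=4.0, k_tilt=3.0, dt=0.002):
    """One Euler step of four coupled phase oscillators with vestibular (body tilt) feedback."""
    dphi = np.full(4, omega)
    for i in range(4):
        for j in range(4):
            # Pull each oscillator toward the pace pattern encoded in the offsets.
            dphi[i] += k_couple * np.sin(phases[j] - phases[i] - (pace_offsets[j] - pace_offsets[i]))
        # Vestibular feedback: body pitch advances or delays the oscillator depending on its phase.
        dphi[i] += k_tilt * body_tilt * np.cos(phases[i])
    return phases + dphi * dt

phases = np.random.uniform(0, 2 * np.pi, 4)
for _ in range(5000):
    tilt = 0.05 * np.sin(phases[0])      # crude stand-in for the measured pitch oscillation
    phases = step_cpg(phases, tilt)

print({leg: round(float((phases[i] - phases[0]) % (2 * np.pi)), 2) for i, leg in enumerate(LEGS)})
```

Raising or lowering the tilt-feedback gain relative to the coupling gain shifts the stationary phase differences, which is the qualitative mechanism behind the speed-dependent gait transitions reported above.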
(This article belongs to the Special Issue Mechatronics Systems and Robots)
Show Figures

Figure 1: Robot (a) and simulated (b) model of Spinalbot.
Figure 2: Neural configuration.
Figure 3: Diagram of leg motion and body tilt.
Figure 4: Diagram of the linear joint: (a) a general diagram, (b) a state in the swing phase, (c) a state in the stance phase, and (d) a setting to measure the stiffness for Figure 5.
Figure 5: Stiffness of the linear joint in the stance phase when T = 54 ms.
Figure 6: Values of the speed parameters for the simulated model (a) at low to medium speeds and (b) at medium to high speeds. The parameters are colored as follows: T_r: black; K^F_{pi st}: blue; K^τ_{pi sw}: orange; K^τ_{pi st}: green; θ_{di sw}: purple; θ_{di st}: magenta.
Figure 7: Values of the speed parameters for the robot (a) at low to medium speeds and (b) at medium to high speeds. The parameters are colored as follows: T_r: black; T: blue; K^τ_{pi sw}: orange; K^τ_{pi st}: green; θ_{di sw}: purple; θ_{di st}: magenta.
Figure 8: Speeds during the gait transition from walking to pacing in the simulation.
Figure 9: Simulated results for walking and pacing at low and medium speeds, respectively: (a) a walk at 0.2 m/s and (b) a pace at 0.5 m/s. LF, LH, RF, and RH denote the left foreleg, left hind leg, right foreleg, and right hind leg. CPG outputs are colored LF: blue, LH: red, RF: green, RH: purple; footfalls are indicated by thick line segments, and the black wavy line represents body tilt (positive when facing down).
Figure 10: Speeds during the gait transition from pacing to galloping in the simulation.
Figure 11: Simulated results for pacing and galloping at medium and high speeds, respectively: (a) a pace at 0.9 m/s and (b) a rotary gallop at 1.6 m/s; legend as in Figure 9.
Figure 12: Speeds during the gait transition from walking to pacing.
Figure 13: Experimental results for walking and pacing at low and medium speeds, respectively: (a) a walk at 0.17 m/s and (b) a pace at 0.29 m/s; legend as in Figure 9.
Figure 14: Speeds during the gait transition from pacing to galloping.
Figure 15: Experimental results for pacing and galloping at medium and high speeds, respectively: (a) a pace at 0.5 m/s and (b) a rotary gallop at 1.0 m/s; legend as in Figure 9.
Figure 16: Relationship between the speed parameter and maximum speed; the plot shows the maximum speed at each proportional gain (orange: with vestibular modulation; blue: without vestibular modulation).
Figure 17: Relationship between the vestibular modulation level and phase difference; the plot shows the phase difference at each vestibular modulation level k_2 (red: left foreleg vs. right foreleg, LF−RF; blue: left foreleg vs. left hind leg, LF−LH).
Figure 18: Experimental results for (a) a pace at 0.1 m/s and (b) a walk at 0.17 m/s. CPG outputs are colored LF: blue, LH: red, RF: green, RH: purple; footfalls are indicated by thick line segments, and the black trajectory represents body tilt. A, A', B, and B' indicate periods of forward tilt; C and D indicate activation of the extensor neurons; E and F indicate activation of the flexor neurons.
Figure 19: Experimental results for (a) a pace at 0.6 m/s and (b) a rotary gallop at 0.7 m/s; colors as in Figure 18. G and G' indicate periods of backward tilt; H and H' indicate periods of forward tilt; I and J indicate activation of the extensor neurons; K and L indicate activation of the flexor neurons.
Figure 20: The principle of gait generation and transition: (I) walk, (II) pace, (III) rotary gallop. The order of switching from the stance phase to the swing phase in the lateral pair is listed below each robot diagram.
14 pages, 82513 KiB  
Article
A Novel 3D Ring-Based Flapper Valve for Soft Robotic Applications
by Kelly Low, Devin R. Berg and Perry Y. Li
Robotics 2022, 11(1), 2; https://doi.org/10.3390/robotics11010002 - 22 Dec 2021
Cited by 1 | Viewed by 4119
Abstract
In this paper, the design and testing of a novel valve for the intuitive spatial control of soft or continuum manipulators are presented. The design of the valve is based on the style of a hydraulic flapper valve, but with simultaneous control of [...] Read more.
In this paper, the design and testing of a novel valve for the intuitive spatial control of soft or continuum manipulators are presented. The design of the valve is based on the style of a hydraulic flapper valve, but with simultaneous control of three pressure feed points, which can be used to drive three antagonistically arranged hydraulic actuators for positioning soft robots. The variable control orifices are arranged in a rotationally symmetric radial pattern to allow for an inline mounting configuration of the valve within the body of a manipulator. Positioning the valve ring at various 3D configurations results in different pressurizations of the actuators and corresponding spatial configurations of the manipulator. The design of the valve is suitable for miniaturization and use in applications with size constraints such as small soft manipulators and surgical robotics. Experimental validation showed that the performance of the valve can be reasonably modeled and can effectively drive an antagonistic arrangement of three actuators for soft manipulator control. Full article
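The pressure-dividing behavior of one actuator branch can be approximated with the textbook orifice equation for two restrictions in series, a fixed inlet and the variable ring-controlled outlet. This is a generic flow-divider relation written for illustration; the paper's own orifice model and calibrated areas (its Equations (1)-(3)) may differ, and the dimensions below are placeholders.

```python
import numpy as np

def pressure_ratio(a_out_over_a_in):
    """Actuator-to-supply pressure ratio for two sharp-edged orifices in series
    (fixed inlet area A_in, variable outlet area A_out), with the return at zero gauge pressure."""
    r = np.asarray(a_out_over_a_in, dtype=float)
    return 1.0 / (1.0 + r ** 2)

# Placeholder geometry: the ring-to-seat gap h sets a curtain area A_out = pi * d_orifice * h.
d_orifice = 1.0e-3                     # m, placeholder orifice diameter
a_in = 0.2e-6                          # m^2, placeholder fixed inlet area
gaps = np.linspace(0.0, 0.4e-3, 5)     # m, gap distances from fully closed to wide open
a_out = np.pi * d_orifice * gaps

for h, r in zip(gaps, pressure_ratio(a_out / a_in)):
    print(f"gap = {h * 1e3:.2f} mm -> P_actuator / P_supply = {r:.2f}")
```

Closing the ring-controlled gap (h approaching 0) drives the actuator pressure toward the supply pressure, which is the qualitative behavior a flapper-style valve exploits to steer the three antagonistic actuators.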
(This article belongs to the Section Soft Robotics)
Show Figures

Figure 1: Hydraulic circuit concept for (A) control of a single actuator and (B) the three-actuator ring valve. (C) Relationship between actuated pressure ratio and area ratio assuming Equation (3).
Figure 2: (A) Model of the ring-based flapper valve design and (B) model showing part of the flow path for both the closed (top) and open (bottom) variable-orifice conditions; in (B) the ring is omitted for clarity. (C) D_r is the inner diameter of the ring, which is manipulated based on the axial position z. The center of the ring is offset from the center of the valve body; α is the angle of the offset relative to a reference line on the hub, and R_o is the radial offset between the valve center C_v and the ring center C_r. O1, O2, and O3 are the orifices on the valve body, and the gap distances h_1, h_2, and h_3 are the distances between O1 and O1′, O2 and O2′, and O3 and O3′. The figure is not to scale and is for illustrative purposes only.
Figure 3: Examples of two ring positions at the maximum axial position with (A) normalized radial offset R̄_o = 0.5 and α = 120°, and (B) R̄_o = 1 and α = 240°. Predicted pressure ratios at z = 0 (min) for (C) R̄_o = 0.5 and (D) R̄_o = 1, and at z = 8 mm (max) for (E) R̄_o = 0.5 and (F) R̄_o = 1.
Figure 4: Effect of the valve positions on a continuum manipulator. (A) Varying α with z = 6 mm and R̄_o = 0.2; (B) varying R̄_o with z = 6 mm and α = 180°; (C) varying z with α = 60°, where the radial offsets R_o at all three z values are the same as for R̄_o = 0.2 at z = 6 mm.
Figure 5: (A) Machined prototype of the ring-based control valve design. (B) Example cross-section of the multi-lumen tubing used to provide both feed and return lines for the control valve. (C) Illustration of the valve connection to the hydraulic supply and return lumens of the tubing.
Figure 6: Experimental rig in which each orifice can be individually opened or closed using a servo motor driving a screw.
Figure 7: (A) Actuator-to-input pressure ratio versus gap distance for each orifice. (B) Calibrated area ratios as a function of gap distance. (C) Relationship between the angle of the servo-motor-simulated ring and the pressure ratio at an axial position of 4 mm (mid-position of the ring) and a radial offset of 0.5; markers ('o') show experimental results at different ring angles, and lines show the orifice-equation (Equation (1)) relationship. (D) The same relationship at a radial offset of 1. (E) Relationship between the angle of the actual ring and the pressure ratio at an axial position of 4 mm and a radial offset of 1.
Figure 8: Soft actuator based on the Multi-Module Variable Stiffness Manipulator, used for experimental validation of the valve.
Figure 9: Manipulation of the soft robot by positioning the ring at different positions and angles, driving the soft robot to various corresponding positions.
21 pages, 6205 KiB  
Article
A Robot Arm Design Optimization Method by Using a Kinematic Redundancy Resolution Technique
by Omar W. Maaroof, Mehmet İsmet Can Dede and Levent Aydin
Robotics 2022, 11(1), 1; https://doi.org/10.3390/robotics11010001 - 22 Dec 2021
Cited by 8 | Viewed by 6105
Abstract
Redundancy resolution techniques have been widely used for the control of kinematically redundant robots. In this work, one of the redundancy resolution techniques is employed in the mechanical design optimization of a robot arm. Although the robot arm is non-redundant, the proposed method [...] Read more.
Redundancy resolution techniques have been widely used for the control of kinematically redundant robots. In this work, one of the redundancy resolution techniques is employed in the mechanical design optimization of a robot arm. Although the robot arm is non-redundant, the proposed method modifies robot arm kinematics by adding virtual joints to make the robot arm kinematically redundant. In the proposed method, a suitable objective function is selected to optimize the robot arm's kinematic parameters by enhancing one or more performance indices. Then the robot arm's end-effector is fixed at critical positions while the redundancy resolution algorithm moves its joints, including the virtual joints, through the self-motion of the redundant robot. Hence, the optimum values of the virtual joints are determined, and the design of the robot arm is modified accordingly. An advantage of this method is the visualization of the changes in the manipulator's structure during the optimization process. In this work, as a case study, a passive robotic arm that is used in a surgical robot system is considered, and the task is defined as the determination of the optimum base location and the first link's length. The results indicate the effectiveness of the proposed method. Full article
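The procedure sketched in the abstract, pinning the end-effector at a critical pose while the redundant (real plus virtual) joints drift through the null space to improve a performance index, is commonly written as the gradient-projection law q̇ = J⁺ẋ + (I − J⁺J) k ∇H(q). The toy example below applies that generic law to a planar 3R arm with the Yoshikawa manipulability as a stand-in objective; it is not the NeuRoboScope kinematics, and the objective is only a placeholder for the paper's modified condition number.

```python
import numpy as np

def fk(q, L):
    """Planar 3R forward kinematics (end-effector position only)."""
    a = np.cumsum(q)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(q, L):
    a = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(a[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(a[i:]))
    return J

def manipulability(q, L):
    J = jacobian(q, L)
    return np.sqrt(np.linalg.det(J @ J.T))

L = np.array([0.4, 0.3, 0.2])          # placeholder link lengths
q = np.array([0.3, 0.8, -0.5])         # placeholder initial configuration
x_fixed = fk(q, L)                     # keep the end-effector pinned at this critical position

eps = 1e-6
for _ in range(300):
    J = jacobian(q, L)
    J_pinv = np.linalg.pinv(J)
    # Numerical gradient of the performance index with respect to the joints.
    grad = np.array([(manipulability(q + eps * np.eye(3)[i], L)
                      - manipulability(q - eps * np.eye(3)[i], L)) / (2 * eps)
                     for i in range(3)])
    # Primary task: hold x_fixed (small corrective velocity); secondary task: climb the index.
    x_err = x_fixed - fk(q, L)
    q_dot = J_pinv @ (5.0 * x_err) + (np.eye(3) - J_pinv @ J) @ grad
    q = q + 0.01 * q_dot

print("end-effector drift:", np.linalg.norm(fk(q, L) - x_fixed))
print("manipulability:", manipulability(q, L))
```

In the paper's setting, the joints that converge under the null-space motion correspond to the virtual design parameters (base location and first link length), whose converged values are then frozen into the mechanical design.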
Show Figures

Figure 1: The surgical robotic system for minimally invasive pituitary tumor surgery: NeuRoboScope.
Figure 2: Possible design parameters defined with respect to a modified DH convention.
Figure 3: Kinematic scheme of the passive robot arm.
Figure 4: The surgery room setting with the monitor, the NeuRoboScope system, and the surgeons.
Figure 5: Coordinate system fixed on the surgery table (the unit vector along the x-axis is u_1^(0) and the unit vector along the y-axis is u_2^(0)).
Figure 6: Change of the robot arm structure and the manipulability ellipse (shown in blue) during the optimization routine using the modified condition number.
Figure 7: Variation of the manipulability index and singular values during the optimization routine using the modified condition number.
Figure 8: The optimization procedure with the modified condition number: (a) variation of the generalized inertia matrix, (b) variation of the components of the objective function, (c) variation of the values of the design parameters.
Figure 9: Variation of the robot arm structure and the manipulability ellipse (shown in blue) during the optimization routine using the modified condition number and the generalized inertia matrix.
Figure 10: Variation of the manipulability index and singular values during the optimization process using the modified condition number and the inertia matrix.
Figure 11: The optimization procedure with the modified condition number and inertia matrix: (a) variation of the generalized inertia matrix, (b) variation of the components of the objective function, (c) variation of the values of the design parameters.
Figure 12: Pareto set and the initiation and termination points of the optimization procedure for test B, minimizing the modified condition number C_n and the determinant of the generalized inertia matrix ImN.
Figure 13: Flowchart for the implementation of the new design optimization designated for robot manipulators.