

Robotics, Volume 12, Issue 2 (April 2023) – 33 articles

Cover Story (view full-size image): Trust is a critical aspect of any shared-task performance in a human–robot partnership, for both the human and the robot. This article proposes a novel trust-assist framework for human–robot collaborative tasks. The developed framework allows the robot to determine a trust level in its human partner. This trust level is computed from human motions, past interactions of the human–robot pair, and the human’s current performance in the task. The trust level is evaluated dynamically throughout the collaborative task, allowing it to change if the human performs false-positive actions, which helps the robot avoid making unpredictable movements and injuring the human. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
22 pages, 602 KiB  
Article
CSP2Turtle: Verified Turtle Robot Plans
by Dara MacConville, Marie Farrell, Matt Luckcuck and Rosemary Monahan
Robotics 2023, 12(2), 62; https://doi.org/10.3390/robotics12020062 - 21 Apr 2023
Viewed by 2596
Abstract
Software verification is an important approach to establishing the reliability of critical systems. One important area of application is in the field of robotics, as robots take on more tasks in both day-to-day areas and highly specialised domains. Our particular interest is in checking the plans that robots are expected to follow to detect errors that would lead to unreliable behaviour. Python is a popular programming language in the robotics domain through the use of the Robot Operating System (ROS) and various other libraries. Python’s Turtle package provides a mobile agent, which we formally model here using Communicating Sequential Processes (CSP). Our interactive toolchain CSP2Turtle with CSP models and Python components enables plans for the turtle agent to be verified using the FDR model-checker before being executed in Python. This means that certain classes of errors can be avoided, providing a starting point for more detailed verification of Turtle programs and more complex robotic systems. We illustrate our approach with examples of robot navigation and obstacle avoidance in a 2D grid-world. We evaluate our approach and discuss future work, including how our approach could be scaled to larger systems. Full article
(This article belongs to the Special Issue Agents and Robots for Reliable Engineered Autonomy 2023)
Figures:
Figure 1: Turtle (▸) output on the left after following the sequence of instructions on the right.
Figure 2: Turtle (▸) after drawing a Hilbert curve using code from CPython’s Turtle demo files. This curve is the result of the call hilbert(turtle, 8, 3, 1).
Figure 3: Architecture of the CSP model. Each box represents a named process. The solid line represents Turtle_main starting the other processes, which are interleaved (⫴). The dashed arrows (⤏) indicate events. Turtle_nav responds to the navigation events with a process for each direction, which is omitted for brevity. These processes then return to Turtle_nav, hence the self-loop shown. pu and pd are the events corresponding to the penup() and pendown() commands.
Figure 4: All the stages and components of our toolchain, showing the flow and order of operations and how the user interacts with it. The arrows indicate how components act on or produce others.
Figure 5: A usage example of CSP2Turtle’s Interactive Mode, where all possible paths lead to the goal and CSP2Turtle accepts the plan.
Figure 6: A usage example of CSP2Turtle’s Interactive Mode, where it is possible that paths lead the turtle outside of the map. The dashed line represents this path. Hence, CSP2Turtle rejects the plan.
Figure 7: An example of CSP2Turtle being used in file input mode with file input.txt, whose contents are also listed here. CSP2Turtle reports that the assertion fails and provides a possible successful path to the goal.
Figure 8: The turtle cannot reach the goal via the given path, and FDR reports that there is no possible path that would work.
Figure 9: Three potential problem maps to support our educational approach. From left to right: (a) a simple map that can be solved with the plan fd → fd, (b) a map that can be solved with the plan fd → lt → fd, and (c) a map that needs external choice for both routes.
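The reachability question that CSP2Turtle hands to the FDR model checker (does the plan keep the turtle on the map, and can it reach the goal?) can be illustrated with a toy stand-in in plain Python. The sketch below is a simplification of our own making, assuming a deterministic fd/lt/rt plan on a bounded grid; the actual toolchain compiles plans to CSP and lets FDR explore every resolution of external choice.

```python
# Toy stand-in for the plan check CSP2Turtle delegates to FDR (illustrative only).
HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W

def check_plan(plan, size, start, goal, obstacles=()):
    """Simulate a deterministic fd/lt/rt plan on a size x size grid.

    Returns True iff the turtle never leaves the map, never hits an
    obstacle, and ends on the goal cell.
    """
    (x, y), h = start, 0  # start facing north
    for step in plan:
        if step == "lt":
            h = (h - 1) % 4
        elif step == "rt":
            h = (h + 1) % 4
        elif step == "fd":
            dx, dy = HEADINGS[h]
            x, y = x + dx, y + dy
            if not (0 <= x < size and 0 <= y < size) or (x, y) in obstacles:
                return False  # FDR would report this as a failed assertion
        else:
            raise ValueError(f"unknown step: {step}")
    return (x, y) == goal

# Mirrors Figure 9a: a straight corridor solved by the plan fd -> fd.
print(check_plan(["fd", "fd"], size=3, start=(0, 0), goal=(0, 2)))  # True
```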
22 pages, 3375 KiB  
Article
An Incremental Inverse Reinforcement Learning Approach for Motion Planning with Separated Path and Velocity Preferences
by Armin Avaei, Linda van der Spaa, Luka Peternel and Jens Kober
Robotics 2023, 12(2), 61; https://doi.org/10.3390/robotics12020061 - 20 Apr 2023
Cited by 8 | Viewed by 2766
Abstract
Humans often demonstrate diverse behaviors due to their personal preferences, for instance, related to their individual execution style or personal margin for safety. In this paper, we consider the problem of integrating both path and velocity preferences into trajectory planning for robotic manipulators. We first learn reward functions that represent the user path and velocity preferences from kinesthetic demonstration. We then optimize the trajectory in two steps, first the path and then the velocity, to produce trajectories that adhere to both task requirements and user preferences. We design a set of parameterized features that capture the fundamental preferences in a pick-and-place type of object transportation task, both in the shape and timing of the motion. We demonstrate that our method is capable of generalizing such preferences to new scenarios. We implement our algorithm on a Franka Emika 7-DoF robot arm and validate the functionality and flexibility of our approach in a user study. The results show that non-expert users are able to teach the robot their preferences with just a few iterations of feedback. Full article
(This article belongs to the Topic Intelligent Systems and Robotics)
Figures:
Figure 1: Leveraging demonstrations as a means of understanding the human’s preferences in an object-carrying task: the robot originally plans the blue trajectory without knowledge of human preferences. The user demonstrates the orange trajectory, which in this instance contains the following preferences: “stay close to the table surface”, “maintain a larger distance from the obstacle” and “pass on the far side of the obstacle”. We develop a method for learning and generalizing such preferences to new scenarios (i.e., new start, goal or obstacle positions).
Figure 2: The human user provides demonstrations, which are used to learn a distribution over reward functions via coactive learning. We use the learned rewards to optimize the robot’s trajectory according to human preferences. The resulting trajectory is executed using an impedance controller. We repeat this process, querying the human for preferred trajectories until convergence. The human can then be taken out of the loop.
Figure 3: An example of convergence towards the optimal path. The optimizer places p^m in different locations in the workspace to generate different paths. The paths explored by the optimizer are indicated in gray. The orange path indicates the output of the path optimizer, resulting from placing the middle waypoint at the location indicated by the blue circle.
Figure 4: From left to right: Scenarios 1–3. The orange and red stars, respectively, indicate the start and goal positions. The obstacle to be avoided is the bag of tomatoes. Scenarios 1 and 2 shared the same starting positions, and Scenarios 2 and 3 shared the same obstacle positions. Notice the difference in height of the goal position in Scenario 1 compared to Scenarios 2 and 3.
Figure 5: The experimental protocol. Users started with workspace familiarization, then went through the first experiment assessing the performance of the framework in understanding their preferences. Finally, in the last experiment, they provided ground truth demonstrations and evaluated the demonstrated trajectories in adhering to the set of predefined preferences. The numbers indicate the number of demonstrations given, either by the human (training/correction/ground truth) or the robot (experiment). The order in which the dummy trajectories were shown to the users was different in every scenario. The symbol “Q” indicates when participants were provided with questionnaires.
Figure 6: Results of the first experiment. (A) Average amount of feedback provided to the system for each task. The dot represents the mean score, the error bars represent the standard deviation, and the crosses indicate individual data points. (B) Results of the Likert questionnaire for the first resulting trajectory in every task (i.e., prior to any additional demonstrations). The error bars correspond to standard deviation.
Figure 7: Results of the NASA-TLX questionnaire after the first experiment.
Figure 8: Scenario 2 results (second experiment) for a single user. The dummy trajectories, in light and dark blue, are designed not to meet the “height from table” and “obstacle side” preferences, respectively. The green dashed and solid lines are the mean of human ground truth demonstrations and the robot trajectory, respectively. The black sphere represents the obstacle. The framework was trained on data from Scenario 3 and had no access to the ground truth shown.
Figure 9: Total feature count errors of each path preference (all participants) with respect to the ground truth (i.e., smaller values for each axis are favored).
Figure 10: Result of the Likert questionnaire for experiment 2. Crosses indicate individual ratings, while the dots and error bars, respectively, represent the mean and standard deviation. Users clearly recognize and highly rate the output of the framework in terms of adhering to path preferences.
Figure 11: Training scenario with the human-demonstrated trajectory (green diamonds) and the learned reproductions: ours is represented by dark blue circles; PHI_φ_orig is represented by red plus signs, with the intermediate learning result represented by dots; PHI_φ_ours is represented by purple crosses; and DMP is represented by yellow squares. By placing the markers at equal time intervals, we display the velocity of the trajectories (i.e., the closer the markers, the slower the motion). As PHI does not support differences in velocity, all red and purple markers are spaced equally along the trajectory. The black, cyan, blue and green circles, respectively, represent the obstacle, robot, goal (bottom) and start (top) positions. For this study, we set d_c = 22.5 cm (indicated by the dashed circle). We consider points within this region as “close” to the obstacle.
Figure 12: We demonstrate the generalization of our method by modifying the goal (top), start (middle) and obstacle (bottom) positions. The yellow, blue, red and purple trajectories correspond, respectively, to the output of the DMPs, our framework and the two versions of PHI. The thickness of the line indicates the inverse of normalized velocity (i.e., the thicker the line, the slower the trajectory).
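The learning loop in this paper alternates trajectory optimization with human corrections, and the reward update itself is compact: model the reward as a linear function of trajectory features and nudge the weights toward the corrected trajectory’s features. The sketch below shows a standard perceptron-style coactive update; the two features and the learning rate are illustrative placeholders, not the paper’s parameterization.

```python
import numpy as np

def coactive_update(w, phi_preferred, phi_current, alpha=0.1):
    """Perceptron-style coactive learning step.

    w:             current reward weights
    phi_preferred: feature vector of the human-corrected trajectory
    phi_current:   feature vector of the robot's proposed trajectory
    """
    return w + alpha * (phi_preferred - phi_current)

def reward(w, phi):
    """Linear reward model: R(xi) = w . phi(xi)."""
    return float(np.dot(w, phi))

# One feedback iteration: the human's correction had a larger obstacle
# clearance (feature 0) and a lower height over the table (feature 1).
w = np.zeros(2)
w = coactive_update(w, phi_preferred=np.array([0.8, 0.2]),
                    phi_current=np.array([0.3, 0.5]))
print(w)  # -> [ 0.05 -0.03]
```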
22 pages, 2155 KiB  
Article
UAV Power Line Tracking Control Based on a Type-2 Fuzzy-PID Approach
by Guilherme A. N. Pussente, Eduardo P. de Aguiar, Andre L. M. Marcato and Milena F. Pinto
Robotics 2023, 12(2), 60; https://doi.org/10.3390/robotics12020060 - 20 Apr 2023
Cited by 9 | Viewed by 2879
Abstract
A challenge for inspecting transmission power lines with Unmanned Aerial Vehicles (UAVs) is to precisely determine their position and orientation, considering that the geo-location of these elements via GPS is often inconsistent. Therefore, a viable alternative is to use visual information from cameras attached to the central part of the UAV, enabling a control technique that keeps the lines positioned at the center of the image. This work therefore proposes a PID (proportional–integral–derivative) controller tuned through interval type-2 fuzzy logic (IT2_PID) for the transmission-line-follower problem. The PID gains are selected online as the position and orientation errors and their respective derivatives change. The methodology was built in Python with the Robot Operating System (ROS) interface. The key point of the proposed methodology is its easy reproducibility, since the designed control loop does not require the mathematical model of the UAV. The tests were performed using the Gazebo simulator. The outcomes demonstrated that the proposed type-2 fuzzy variant displayed lower error values, both for the stabilization tests (keeping the UAV centered and oriented with the lines) and for line following with a time-variant trajectory, compared to the analogous T1_PID control and a classical PID controller tuned by the Ziegler–Nichols method. Full article
(This article belongs to the Special Issue UAV Systems and Swarm Robotics)
Figures:
Figure 1: Lateral view of the line-following problem.
Figure 2: Top view of the line-following problem.
Figure 3: Extraction of errors using visual information.
Figure 4: Visual information extracted from the image.
Figure 5: Trajectory estimation based on the RANSAC algorithm.
Figure 6: IT2_FLS block diagram.
Figure 7: Schematic of the IT2_PID controller.
Figure 8: IT2_PID_v membership functions.
Figure 9: IT2_PID_r membership functions.
Figure 10: Simulated scenario in Gazebo.
Figure 11: Comparison of the presented approaches—lateral error.
Figure 12: Comparison of the accumulated lateral error.
Figure 13: Comparison of the lateral error for different fuzzy rule sets.
Figure 14: Comparison of the presented approaches—angular error.
Figure 15: Comparison of the accumulated angular error.
Figure 16: Comparison of the error for a transmission line inspection mission with fuzzy-PID—lateral control.
Figure 17: Comparison of the error for a transmission line inspection mission with fuzzy-PID—angular control.
Figure 18: Accumulated error evolution for the transmission line inspection mission simulation—lateral control.
Figure 19: Accumulated error evolution for the transmission line inspection mission simulation—angular control.
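The controller’s core idea is a PID loop whose gains are re-selected online from the current error signals. As a rough sketch, the code below schedules gains with type-1 triangular memberships; the paper’s IT2_PID uses interval type-2 fuzzy sets, whose upper and lower membership functions and type-reduction step are deliberately omitted here, and all gain values are invented for the example.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def scheduled_gains(e):
    """Blend 'small error' and 'large error' gain sets (type-1 stand-in)."""
    mu_small = tri(abs(e), -0.5, 0.0, 0.5)
    mu_large = 1.0 - mu_small
    kp = mu_small * 0.8 + mu_large * 2.0   # illustrative gain values
    ki = mu_small * 0.1 + mu_large * 0.3
    kd = mu_small * 0.05 + mu_large * 0.15
    return kp, ki, kd

def pid_step(e, e_prev, e_int, dt):
    """One step of the gain-scheduled PID on a lateral or angular error."""
    kp, ki, kd = scheduled_gains(e)
    e_int += e * dt
    u = kp * e + ki * e_int + kd * (e - e_prev) / dt
    return u, e_int
```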
19 pages, 6741 KiB  
Article
Tunable Adhesion of Shape Memory Polymer Dry Adhesive Soft Robotic Gripper via Stiffness Control
by ChangHee Son, Subin Jeong, Sangyeop Lee, Placid M. Ferreira and Seok Kim
Robotics 2023, 12(2), 59; https://doi.org/10.3390/robotics12020059 - 17 Apr 2023
Cited by 11 | Viewed by 4012
Abstract
A shape memory polymer (SMP) has been intensively researched in terms of its exceptional reversible dry adhesive characteristics and related smart adhesive applications over the last decade. However, its unique adhesive properties have rarely been taken into account for other potential applications, such as robotic pick-and-place, which might otherwise improve robotic manipulation and contribute to the related fields. This work explores the use of an SMP to design an adhesive gripper that picks and places a target solid object employing the reversible dry adhesion of an SMP. The numerical and experimental results reveal that an ideal compositional and topological SMP adhesive design can significantly improve its adhesion strength and reversibility, leading to a strong grip force and a minimal release force. Next, a radially averaged power spectrum density (RAPSD) analysis proves that active heating and cooling with a thermoelectric Peltier module (TEC) substantially enhances the conformal adhesive contact of an SMP. Based on these findings, an adhesive gripper is designed, fabricated, and tested. Remarkably, the SMP adhesive gripper interacts not only with flat and smooth dry surfaces, but also moderately rough and even wet surfaces for pick-and-place, showing high adhesion strength (>2 standard atmospheres) which is comparable to or exceeds those of other single-surface contact grippers, such as vacuum, electromagnetic, electroadhesion, and gecko grippers. Lastly, the versatility and utility of the SMP adhesive gripper are highlighted through diverse pick-and-place demonstrations. Associated studies on physical mechanisms, SMP adhesive mechanics, and thermal conditions are also presented. Full article
(This article belongs to the Section Soft Robotics)
Figures:
Figure 1: (a) Temperature dependence of the storage moduli of stiff and soft shape memory polymers (SMPs); (b) a reversible dry adhesive hook employing the SMP.
Figure 2: A test setup to measure the adhesion strength of an SMP sample. While an SMP adheres to the target plate, the bucket is filled with water. The adhesion strength of the SMP is measured once it fails to adhere to the target plate.
Figure 3: (a) The fabrication of a single SMP sample. A 25 mm diameter circular form is carved out of a cured thick SMP; (b) the fabrication of a dual SMP sample; after curing and cutting a soft SMP into a ring form, a stiff SMP precursor is filled and cured inside the ring-shaped soft SMP; (c) the fabrication of a release-tip SMP sample; a single drop of SMP precursor is deposited on the SMP surface and cured to form a release tip on a 25 mm diameter SMP sample.
Figure 4: (a) The schematic dual SMP sample attached to the aluminum block with its boundary condition and the FEA plot of the first principal stress along the A–A’ line; (b) the schematic SMP with a release tip attached to the aluminum block with the boundary condition and the FEA plot of the first principal stress along the B–B’ line.
Figure 5: (a) The maximal adhesion strength of single and dual SMPs, as tested experimentally; (b) adhesion strength of SMPs with and without release tip in glassy and rubbery states, as evaluated experimentally; the error bar represents the maximum and minimum of the three tested results.
Figure 6: The fabrication of a bi-layer SMP sample. A 0.5 mm thick soft SMP is cured on a glass slide. Then, a 2 mm thick stiff SMP is cured on top of the soft SMP. After being fully cured, the bi-layer SMP is cut using a laser cutter in a 25 mm diameter circular form.
Figure 7: The force–distance curves for SMP samples. The areas under the curves, which indicate the work for the single-layer SMP and bi-layer SMP, are 631 Nmm and 682 Nmm, respectively.
Figure 8: The schematic of a test setup with a thermoelectric Peltier module (TEC) for measuring the adhesion force at failure and the temperature at the center of a backing aluminum (BA) and at the interface between the SMP and the adherend (CI).
Figure 9: The adhesion force that is formed using a thermoelectric module (TEC) and a hotplate (HP) as heating methods for four different adherend materials: acrylic, wood, glass, and aluminum. The three lines of the error bar represent the maximum, median, and minimum values, respectively, from top to bottom.
Figure 10: The temperature profiles at the center of the backing aluminum (BA) and at the interface between the SMP and the adherend (CI). The solid lines indicate the experimental results (EXP), and the dashed lines represent the finite element analysis results (FEA). Two different cases are tested: one using the thermoelectric module (TEC) for both heating and cooling, and another using only a hotplate as a heating method (HP). Four plots show the results from different adherend materials: acrylic, wood, glass, and aluminum. The background of each plot is a gradient filled with colors based on the state of the SMP (rubbery or glassy state).
Figure 11: Radially averaged Fourier power spectrum data from 2-dimensional raw roughness height data for different adherend materials including acrylic (a), wood (b), glass (c), and aluminum (d). The black line indicates the spectrum of the target adherend. The red and blue lines individually indicate the spectra of the SMP that has shape-adapted to the target adherend using the thermoelectric Peltier module (TEC) and hotplate (HP), respectively. The green line indicates the spectrum for the flat state of the SMP before adhering, which is identical for all (a–d).
Figure 12: (a) A computer-aided design (CAD) drawing and photograph of an SMP adhesive gripper. A battery and thermocouples are included in the gripper for heating/cooling and temperature sensing. (b) The SMP adhesive gripper is used to demonstrate pick-and-place functionality. (c) Images show the SMP adhesive gripper picking up sandpaper, a wooden plate, a tile, poster paper, and an angled acrylic plate, as well as an acrylic plate wet with blue-dyed water.
Figure 13: Optical microscope images and surface roughness profiles of (a) sandpaper, (b) a wooden plate, (c) a tile, (d) poster paper, and (e) an acrylic plate used in the picking demonstration.
Figure 14: The free-body diagram of the foot of an SMP adhesive gripper: (a) forces during preloading; (b) forces during picking.
Figure 15: (a) The magnified view of a pin-in-slot joint used between the links; (b) the schematic of an SMP adhesive gripper and the free-body diagram of the links.
Figure 16: The temperature profile of an SMP adhesive gripper foot during heating and cooling with a TEC as a function of time.
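The RAPSD analysis used to quantify how well the SMP conforms to a rough adherend reduces to a 2D FFT of the height map followed by averaging the power over annuli of constant spatial frequency. Below is a generic numpy sketch of that computation; windowing, detrending, and physical units are simplifications rather than the paper’s exact processing.

```python
import numpy as np

def rapsd(height_map):
    """Radially averaged power spectral density of a 2D height map."""
    f = np.fft.fftshift(np.fft.fft2(height_map))
    psd = np.abs(f) ** 2
    ny, nx = psd.shape
    y, x = np.indices(psd.shape)
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
    # Average the power over annuli of constant radial spatial frequency.
    sums = np.bincount(r.ravel(), weights=psd.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

# Example: spectrum of a synthetic rough surface.
surface = np.random.default_rng(1).normal(size=(256, 256))
print(rapsd(surface)[:5])
```

Comparing the curve of the shape-adapted SMP against that of the adherend, as in Figure 11, then indicates which roughness wavelengths the polymer replicated.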
17 pages, 5039 KiB  
Article
Development of Serious Games for the Rehabilitation of the Human Vertebral Spine for Home Care
by Rogério Sales Gonçalves, Rodrigo Alves Prado, Guênia Mara Vieira Ladeira and Andréa Licre Pessina Gasparini
Robotics 2023, 12(2), 58; https://doi.org/10.3390/robotics12020058 - 12 Apr 2023
Cited by 2 | Viewed by 2734
Abstract
With the occurrence of pandemics, such as COVID-19, which lead to social isolation, there is a need for home rehabilitation procedures without the direct supervision of health professionals. The great difficulty of treatment at home is the cost of the conventional equipment and the need for specialized labor to operate it. Thus, this paper aimed to develop serious games to assist health professionals in the physiotherapy of patients with spinal pain for clinical and home applications. Serious games integrate serious aspects such as teaching, rehabilitation, and information with the playful and interactive elements of video games. Despite the positive indication and benefits of physiotherapy for cases of chronic spinal pain, the long treatment time, social isolation due to pandemics, and lack of motivation to use traditional methods are some of the main causes of therapeutic failure. Using Unity 3D (version 2019.4.24f1) software and a personal computer with a webcam, we developed aesthetically pleasing, smooth, and attractive games, while maintaining the essence of seriousness that is required for rehabilitation. The serious games, controlled using OpenPose (version v1.0.0alpha-1.5.0) software, were tested with a healthy volunteer. The findings demonstrated that the proposed games can be used as a playful tool to motivate patients during physiotherapy and to reduce cases of treatment abandonment, including at home. Full article
(This article belongs to the Special Issue Service Robotics against COVID-2019 Pandemic)
Figures:
Figure 1: OpenPose keypoints of the human body skeleton.
Figure 2: Physical therapy exercises used in the games: (a) lumbar extension in the lying position; (b) lumbar extension in the upright position; (c) lumbar flexion in the upright position; (d) hip extension; (e) left lateral flexion; (f) right lateral flexion.
Figure 3: Main screen of the asteroids game with the game running.
Figure 4: Controlling the game avatar and destroying asteroids through column movements.
Figure 5: Main screen of the park day game.
Figure 6: Moving the park day game avatar by moving the column: (a) lumbar extension; (b) lumbar flexion.
Figure 7: Main screen of the fishing game.
Figure 8: (a) First position of the fishing game. (b) Flexion movement.
Figure 9: Main screen of the infinite run game.
Figure 10: Moving the infinite run game avatar through column movement.
Figure 11: (a) OpenPose data; (b) Miotec goniometer data for the infinite run game.
Figure 12: (a) Graph of the position over time, associated with the score obtained: A-lying, B-stage 1 lumbar extension, and C-stage 2 lumbar extension in the asteroids game. (b) Graph of position and the distance traveled, associated with the score achieved: A-left lateral flexion, B-center, and C-right lateral flexion in the infinite run game.
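At the core of the games’ control, OpenPose keypoints are turned into the spinal movements of Figure 2, which in its simplest form is one angle per frame computed between two keypoint vectors. The sketch below estimates lateral trunk lean from the neck and mid-hip keypoints of OpenPose’s BODY_25 layout; the threshold for triggering an avatar action and any temporal smoothing are assumptions left out for brevity.

```python
import numpy as np

NECK, MID_HIP = 1, 8  # OpenPose BODY_25 keypoint indices

def trunk_lean_deg(keypoints):
    """Signed lateral trunk lean (degrees) from one frame of keypoints.

    keypoints: array of shape (25, 3) with (x, y, confidence) per joint,
    in image coordinates with y growing downward.
    """
    neck, hip = keypoints[NECK, :2], keypoints[MID_HIP, :2]
    trunk = neck - hip  # vector pointing up the spine
    # Angle between the trunk vector and the vertical image axis.
    return np.degrees(np.arctan2(trunk[0], -trunk[1]))

# A lean beyond, say, 15 degrees could steer the infinite-run avatar
# left or right, mirroring the lateral flexion exercises.
```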
13 pages, 9391 KiB  
Article
Tractor-Robot Cooperation: A Heterogeneous Leader-Follower Approach
by El Houssein Chouaib Harik
Robotics 2023, 12(2), 57; https://doi.org/10.3390/robotics12020057 - 6 Apr 2023
Cited by 6 | Viewed by 2548
Abstract
In this paper, we investigated the idea of including mobile robots as complementary machinery to tractors in an agricultural context. The main idea is not to replace the human farmer, but to augment his/her capabilities by deploying mobile robots as assistants in field operations. The scheme is based on a leader–follower approach. The manned tractor is used as a leader, which will be taken as a reference point for a follower. The follower then takes the position of the leader as a target, and follows it in an autonomous manner. This will allow the farmer to multiply the working width by the number of mobile robots deployed during field operations. In this paper, we present a detailed description of the system, the theoretical aspect that allows the robot to autonomously follow the tractor, in addition to the different experimental steps that allowed us to test the system in the field to assess the robustness of the proposed scheme. Full article
(This article belongs to the Special Issue Robotics and AI for Precision Agriculture)
Figures:
Figure 1: Solectrac electric tractor (the leader).
Figure 2: The equipment installed on the tractor: the Jetson AGX Xavier, the Piksi-multi GNSS-RTK board, and the wireless router. The IMU and the GNSS receiver are mounted on the chassis of the tractor.
Figure 3: The electric robot (the follower).
Figure 4: The dual GNSS-RTK receivers added to the robot for robust heading estimation.
Figure 5: The configuration of the leader-follower scheme.
Figure 6: The evolution of both the distance and orientation separating the robot from the tractor during the static waypoint experiments. The robot stopped at 6.2 m and 5.7 m, respectively, when beginning the navigation from the two starting positions. (a) Static waypoint navigation: starting position 1. (b) Static waypoint navigation: starting position 2.
Figure 7: The recorded GNSS trajectory of the robot during the tuning of the controller gains. The black trajectory (long dashed line) represents the path the robot followed starting from “start pose 1”. The red trajectory (short dashed line) represents the path the robot followed starting from “start pose 2”. The green “x” represents the fixed location of the tractor.
Figure 8: The evolution over time of both the distance and orientation separating the robot from the tractor during the static waypoint experiments. The robot stopped at 6.2 m and 5.7 m, respectively, when beginning its navigation from the two starting positions. (a) Distance: start pose 1. (b) Orientation: start pose 1. (c) Distance: start pose 2. (d) Orientation: start pose 2.
Figure 9: Aerial imagery of the leader–follower navigation inside the field. The sequences illustrate how the robot followed the tractor during a U-turn.
Figure 10: The path of the robot and its reference point to the tractor.
Figure 11: Leader–follower configuration during field experiments.
Figure 12: Leader–follower configuration during field experiments.
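The follower’s control problem reduces to regulating the distance and bearing to the leader’s GNSS position toward a desired standoff. A minimal unicycle-style sketch is given below; the gains, the 6 m standoff (chosen to echo the roughly 6.2 m and 5.7 m stopping distances reported in Figures 6 and 8), and the saturation limits are placeholders rather than the implemented control law.

```python
import math

def follow_step(robot_pose, leader_xy, d_stop=6.0, k_v=0.5, k_w=1.5,
                v_max=1.5, w_max=0.8):
    """One control step of a unicycle follower tracking the leader's position.

    robot_pose: (x, y, heading) of the follower in a local ENU frame
    leader_xy:  (x, y) of the tractor from GNSS-RTK
    Returns (v, w): forward and angular velocity commands.
    """
    x, y, th = robot_pose
    dx, dy = leader_xy[0] - x, leader_xy[1] - y
    dist = math.hypot(dx, dy)
    bearing_err = math.atan2(dy, dx) - th
    # Wrap the bearing error to (-pi, pi].
    bearing_err = math.atan2(math.sin(bearing_err), math.cos(bearing_err))
    v = max(0.0, min(v_max, k_v * (dist - d_stop)))  # stop short of the tractor
    w = max(-w_max, min(w_max, k_w * bearing_err))
    return v, w
```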
19 pages, 1855 KiB  
Article
A Simplified Kinematics and Kinetics Formulation for Prismatic Tensegrity Robots: Simulation and Experiments
by Azamat Yeshmukhametov and Koichi Koganezawa
Robotics 2023, 12(2), 56; https://doi.org/10.3390/robotics12020056 - 3 Apr 2023
Cited by 3 | Viewed by 2753
Abstract
Tensegrity robots offer several advantageous features, such as being hyper-redundant, lightweight, shock-resistant, and incorporating wire-driven structures. Despite these benefits, tensegrity structures are also recognized for their complexity, which presents a challenge when addressing the kinematics and dynamics of tensegrity robots. Therefore, this research paper proposes a new kinematic/kinetic formulation for tensegrity structures that differs from the classical matrix differential equation framework. The main contribution of this research paper is a new formulation, based on vector differential equations, which can be advantageous when it is convenient to use a smaller number of state variables. The limitation of the proposed kinematics and kinetic formulation is that it is only applicable for tensegrity robots with prismatic structures. Moreover, this research paper presents experimentally validated results of the proposed mathematical formulation for a six-bar tensegrity robot. Furthermore, this paper offers an empirical explanation of the calibration features required for successful experiments with tensegrity robots. Full article
Figures:
Figure 1: Prismatic tensegrity robot structures: (a) single-segment triangular prismatic structure, (b) single-segment quadrangular prismatic tensegrity, and (c) dual-segment triangular prismatic structure with two layers.
Figure 2: The computer-aided design model of the prismatic tensegrity robot with a triangular base.
Figure 3: Annotated schematics of the prismatic tensegrity robot. Thick blue, orange, and green solid lines are solid bars. Dotted orange lines are driving wires; dotted green lines are passive wires. The solid blue line is the saddle wire. Base and end-plate nodes are blue-filled circles. Black and orange-filled circles are middle nodes (universal joints).
Figure 4: Tensegrity robot experimental setup with zoomed components inside the motion capture area.
Figure 5: Illustration of the terms used for the computation of the torque vector T_01.
Figure 6: Initial posture of the tensegrity robot.
Figure 7: Motor positions, velocities, and torques for the considered motion experiment.
Figure 8: Tensegrity robot control environment.
Figure 9: Comparison between configurations at different time instants of the considered experiment for the simulator and the actual tensegrity robot.
Figure 10: Tensegrity robot node trajectories provided by the Optitrack motion capture cameras.
Figure 11: Comparison of experimental data (in blue) with simulation data (in green) for the time evolution of the x and y coordinates of node n_21.
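The flavor of the proposed vector formulation can be conveyed with a bare-bones node model: each wire contributes a tension along its unit direction, and each node’s position and velocity vectors are integrated directly instead of assembling a large matrix differential equation. The sketch below is a generic spring-wire node integrator under assumptions of our own (linear wire stiffness, slack wires carrying no force, lumped damping); it is not the paper’s formulation.

```python
import numpy as np

def wire_force(p_i, p_j, k, rest_len):
    """Tension on node i from a wire to node j (zero when slack)."""
    d = p_j - p_i
    length = np.linalg.norm(d)
    stretch = max(length - rest_len, 0.0)  # wires cannot push
    return k * stretch * d / length

def step_node(p, v, neighbors, k, rest_len, mass, dt, damping=0.5):
    """Semi-implicit Euler step for one free node of the structure."""
    f = sum(wire_force(p, q, k, rest_len) for q in neighbors)
    f += np.array([0.0, 0.0, -9.81 * mass]) - damping * v  # gravity + damping
    v = v + dt * f / mass
    return p + dt * v, v
```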
13 pages, 4197 KiB  
Article
Heart Rate as a Predictor of Challenging Behaviours among Children with Autism from Wearable Sensors in Social Robot Interactions
by Ahmad Qadeib Alban, Ahmad Yaser Alhaddad, Abdulaziz Al-Ali, Wing-Chee So, Olcay Connor, Malek Ayesh, Uvais Ahmed Qidwai and John-John Cabibihan
Robotics 2023, 12(2), 55; https://doi.org/10.3390/robotics12020055 - 1 Apr 2023
Cited by 10 | Viewed by 3627
Abstract
Children with autism face challenges in various skills (e.g., communication and social) and they exhibit challenging behaviours. These challenging behaviours represent a challenge to their families, therapists, and caregivers, especially during therapy sessions. In this study, we have investigated several machine learning techniques and data modalities, acquired using wearable sensors from children with autism during their interactions with social robots and toys, for their potential to detect challenging behaviours. Each child wore a wearable device that collected data. Video annotations of the sessions were used to identify the occurrence of challenging behaviours. Extracted time features (i.e., mean, standard deviation, min, and max) in conjunction with four machine learning techniques were considered to detect challenging behaviours. The heart rate variability (HRV) changes have also been investigated in this study. The XGBoost algorithm achieved the best performance (i.e., an accuracy of 99%). Additionally, physiological features outperformed the kinetic ones, with the heart rate being the main contributing feature in the prediction performance. One HRV parameter (i.e., RMSSD) was found to correlate with the occurrence of challenging behaviours. This work highlights the importance of developing the tools and methods to detect challenging behaviours among children with autism during aided sessions with social robots. Full article
(This article belongs to the Special Issue Towards Socially Intelligent Robots)
Figures:
Figure 1: An overview of the adopted methodology in this study. (a) The Empatica E4 wearable device. (b) One of the children interacting with the social robots (see https://youtu.be/sGNslV2Yuks (accessed on 20 March 2023)). (c) A sample of the acquired data using the wearable device.
Figure 2: The evaluation metrics results for the three categories using the best-performing algorithm (i.e., XGBoost).
Figure 3: The features contributing to the performance of the best prediction algorithm (i.e., XGBoost) for each child. (a) Child 1. (b) Child 2. (c) Child 3. (d) Child 4. (e) Child 5.
Figure 4: The features contributing to the performance of the best prediction algorithm (i.e., XGBoost) for the combined model.
Figure 5: The changes in the HRV (i.e., RMSSD) corresponding to different states. (a) The child is overwhelmed and stimulated by the bubble gun toy. (b) The child is in a rest state. (c) The child experiences a challenging behaviour.
Figure 6: The importance of HRV in the performance of the machine learning model (i.e., XGBoost). (a) The feature importance for one of the participants, whose challenging behaviours were more frequent and intense. (b) The feature importance for another participant, who displayed less challenging behaviours.
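Two computational pieces of this pipeline are easy to make concrete: the RMSSD statistic used as the HRV marker, and the four time features extracted per sensor window. The sketch below assumes inter-beat intervals in milliseconds and toy training data, and uses xgboost’s scikit-learn wrapper; the window length, labels, and hyperparameters are illustrative.

```python
import numpy as np
from xgboost import XGBClassifier

def rmssd(ibi_ms):
    """Root mean square of successive differences of inter-beat intervals."""
    diffs = np.diff(np.asarray(ibi_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def window_features(signal):
    """The four time features used per sensor window: mean, std, min, max."""
    s = np.asarray(signal, dtype=float)
    return np.array([s.mean(), s.std(), s.min(), s.max()])

# X: one row of stacked window features per segment; y: 1 when the video
# annotation marks a challenging behaviour in that segment (toy data here).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = rng.integers(0, 2, size=100)
clf = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
clf.fit(X, y)
print(clf.predict(X[:5]))
```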
19 pages, 4561 KiB  
Article
Design and Evaluation of an Intuitive Haptic Teleoperation Control System for 6-DoF Industrial Manipulators
by Ivo Dekker, Karel Kellens and Eric Demeester
Robotics 2023, 12(2), 54; https://doi.org/10.3390/robotics12020054 - 1 Apr 2023
Cited by 10 | Viewed by 3583
Abstract
Industrial robots are capable of performing automated tasks repeatedly, reliably and accurately. However, in some scenarios, human-in-the-loop control is required. In this case, having an intuitive system for moving the robot within the working environment is crucial. Additionally, the operator should be aided by sensory feedback to obtain a user-friendly robot control system. Haptic feedback is one way of achieving such a system. This paper designs and assesses an intuitive teleoperation system for controlling an industrial 6-DoF robotic manipulator using a Geomagic Touch haptic interface. The system utilises both virtual environment-induced and physical sensor-induced haptic feedback to provide the user with both a higher amount of environmental awareness and additional safety while manoeuvring the robot within its working area. Different tests show that the system is capable of fully stopping the manipulator without colliding with the environment, and preventing it from entering singularity states with Cartesian end effector velocities of up to 0.25 m/s. Additionally, an operator is capable of executing low-tolerance end effector positioning tasks (∼0.5 mm) with high-frequency control of the robot (∼100 Hz). Fourteen inexperienced volunteers were asked to perform a typical object removal and writing task to gauge the intuitiveness of the system. It was found that when repeating the same test for a second time, the participants performed 22.2% faster on average. The results for the second attempt also became significantly more consistent between participants, as the interquartile range dropped by 82.7% (from 52 s on the first attempt to 9 s on the second). Full article
(This article belongs to the Special Issue Immersive Teleoperation and AI)
Figures:
Figure 1: Diagram of the full teleoperation control system. The haptic interface and host computer keyboard are processed using their drivers. This input is processed further into twist-style movement commands that the robot driver can understand.
Figure 2: The Geomagic Touch haptic interface.
Figure 3: Heads-up display (HUD) showing the available control modes of the teleoperation system. Using the up and down arrow keys, either the proportional or velocity control mode can be selected with translations/rotations (un)locked. The left and right arrow keys allow the user to alter the maximum allowed velocity. With the space bar, the vacuum of the tool can be enabled and disabled. The R key blocks all joints except for the last, for pure rotations around the flange. With the Z key, everything is blocked except translations on the z-axis of the tool. “A” disables the haptic assist. Lastly, “T” toggles between known end-effectors.
Figure 4: Normal data extraction from the collision surface (green surface) as the manipulator moves nearby. The nearest points between the robot and the surface are visualised as blue spheres, the cast ray from Bullet as a red line, and the surface normal as a green arrow as the robot moves over the edge of the green box.
Figure 5: Measured minimum distance values required to ensure a full stop of the robot at varying velocities, and the resulting distance threshold function.
Figure 6: The currently attached tool of the robot (purple) is allowed to enter the allocated restricted zone (blue) with added motion constraints, while the rest of the robot is not.
Figure 7: Example of a force feedback function according to Formula (11), with a lambda value of −25.80 according to Formula (12) and a distance threshold value of 0.1 m.
Figure 8: End effector velocity response of the real robot (cyan) when approaching a singularity state, compared to the input signal from the continuous state monitor system (orange).
Figure 9: End effector velocity response of the real robot (cyan) when approaching a collision state, compared to the input signal from the continuous state monitor system (orange).
Figure 10: Execution of the object removal task with a specific robot tool using vacuum.
Figure 11: Bar chart showing the times for each participant per attempt with the overall average per attempt (left), and the improvement percentage of each participant after the second attempt with the overall percentage of improvement (right).
Figure 12: Boxplot representation of the required times of the participants for both attempts.
Figure 13: Tool used for the writing task (top) and the result of both attempts for a single participant with their handwritten examples (bottom).
Figure 14: Pie chart showing the responses to the Likert questionnaire with the percentages of answers in the ‘Very negative’, ‘Negative’, ‘Positive’ and ‘Very positive’ categories.
Figure 15: Pie chart showing the overall reception of the teleoperation system for the individual object-removal task (left) and writing task (right).
Figure A1: Participants’ writing results.
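Figure 7’s force-feedback curve pairs a distance threshold with a sharply rising force as the tool approaches a collision. The function below is our own guess at such a shape, vanishing at the threshold and peaking at contact; the λ = −25.80 and 0.1 m threshold echo the caption’s values, f_max = 3.3 N matches the Geomagic Touch’s nominal peak force, and the paper’s exact Formula (11) may differ.

```python
import math

def feedback_force(d, f_max=3.3, lam=-25.80, d_thresh=0.1):
    """Haptic force magnitude vs. obstacle distance (illustrative shape).

    Zero beyond d_thresh, f_max at contact, exponential in between.
    """
    if d >= d_thresh:
        return 0.0
    # Normalized so that f(0) = f_max and f(d_thresh) = 0.
    e_t = math.exp(lam * d_thresh)
    return f_max * (math.exp(lam * d) - e_t) / (1.0 - e_t)

for d in (0.0, 0.05, 0.09, 0.12):
    print(f"{d:.2f} m -> {feedback_force(d):.2f} N")
```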
24 pages, 14295 KiB  
Review
A Survey on Open-Source Simulation Platforms for Multi-Copter UAV Swarms
by Ziming Chen, Jinjin Yan, Bing Ma, Kegong Shi, Qiang Yu and Weijie Yuan
Robotics 2023, 12(2), 53; https://doi.org/10.3390/robotics12020053 - 1 Apr 2023
Cited by 3 | Viewed by 7643
Abstract
Simulation platforms are critical and indispensable tools for application development with unmanned aerial vehicles (UAVs), because UAVs are generally costly, have certain requirements for the test environment, and need professionally licensed operators. Thus, developers prefer (or have) to test their applications on simulation platforms before implementing them on real machines. In the past decades, a considerable number of simulation platforms for robots have been developed, which brings convenience to developers but also makes it hard for them to choose a proper one, as they are not always familiar with all the features of the platforms. To alleviate this dilemma, this paper provides a survey of open-source simulation platforms, employing the simulation of a multi-copter UAV swarm as an example. The survey covers seven widely used simulators: Webots, Gazebo, CoppeliaSim, ARGoS, MRDS, MORSE, and USARSim. The paper outlines the requirements for multi-copter UAV swarms and shows how to select an appropriate platform. Additionally, the paper presents a case study of a UAV swarm based on Webots. This research will be beneficial to researchers, developers, educators, and engineers who seek suitable simulation platforms for application development (not only multi-copter UAV swarms but also other types of robots), as it helps them save testing expenses and speed up development. Full article
(This article belongs to the Special Issue The State of the Art of Swarm Robotics)
Figures:
Figure 1: Simulation of a UAV by Webots.
Figure 2: The main interface and a simulation example of Gazebo.
Figure 3: The main interface and a simulation example of CoppeliaSim.
Figure 4: The main interface and a simulation example of ARGoS.
Figure 5: An outdoor scene simulated by MRDS.
Figure 6: A UAV swarm simulated by MORSE.
Figure 7: Simulation of UAVs by USARSim.
Figure 8: The urban environment designed in Webots.
Figure 9: Multi-copter UAV swarm simulated by Webots.
Figure 10: GPS simulation data in Webots.
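For the Webots case study, each swarm member runs its own controller process that can sample the simulated GPS of Figure 10. The minimal controller below sticks to Webots’ documented Python API (Robot, getDevice, step); the device name and the logging are assumptions about the example world rather than code from the survey.

```python
# A minimal Webots Python controller: read the simulated GPS each step.
from controller import Robot

robot = Robot()
timestep = int(robot.getBasicTimeStep())

gps = robot.getDevice("gps")  # device name assumed from the example world
gps.enable(timestep)

while robot.step(timestep) != -1:
    x, y, z = gps.getValues()
    print(f"UAV position: x={x:.2f} y={y:.2f} z={z:.2f}")
```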
15 pages, 3917 KiB  
Article
Design of a Novel Haptic Joystick for the Teleoperation of Continuum-Mechanism-Based Medical Robots
by Yiping Xie, Xilong Hou and Shuangyi Wang
Robotics 2023, 12(2), 52; https://doi.org/10.3390/robotics12020052 - 29 Mar 2023
Cited by 8 | Viewed by 3541
Abstract
Continuum robots are increasingly used in medical applications and the master–slave-based architectures are still the most important mode of operation in human–machine interaction. However, the existing master control devices are not fully suitable for either the mechanical mechanism or the control method. This study proposes a brand-new, four-degree-of-freedom haptic joystick whose main control stick can rotate around a fixed point. The rotational inertia is reduced by mounting all powertrain components on the base plane. Based on the design, kinematic and static models are proposed for position perception and force output analysis, while at the same time gravity compensation is also performed to calibrate the system. Using a continuum-mechanism-based trans-esophageal ultrasound robot as the test platform, a master–slave teleoperation scheme with position–velocity mapping and variable impedance control is proposed to integrate the speed regulation on the master side and the force perception on the slave side. The experimental results show that the main accuracy of the design is within 1.6°. The workspace of the control stick is −60° to 110° in pitch angle, −40° to 40° in yaw angle, −180° to 180° in roll angle, and −90° to 90° in translation angle. The standard deviation of force output is within 8% of the full range, and the mean absolute error is 1.36°/s for speed control and 0.055 N for force feedback. Based on this evidence, it is believed that the proposed haptic joystick is a good addition to the existing work in the field with well-developed and effective features to enable the teleoperation of continuum robots for medical applications. Full article
(This article belongs to the Special Issue Immersive Teleoperation and AI)
Figures:
Figure 1: (a) The 4-DOF flexible trans-esophageal ultrasound probe and (b) the add-on TEE robot.
Figure 2: The mechanical structure of the proposed haptic joystick.
Figure 3: The kinematic illustration and the coordinate definition of the proposed haptic device.
Figure 4: Static torque analysis for the proposed haptic joystick.
Figure 5: Diagram of the haptic device control scheme.
Figure 6: The workspace of the joystick: (a) the tip position of the control stick in Cartesian space; (b) the pitch and yaw angles in joint space.
Figure 7: Experimental setup for the joystick torque output performance test.
Figure 8: Results of the accuracy test of the force output experiments for (a) pitch force and (b) yaw force.
Figure 9: Experimental results of the (a) robot master–slave control and (b) haptic feedback.
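The position–velocity mapping in the teleoperation scheme lets a small master workspace drive an unbounded slave motion: stick deflection commands a probe joint velocity instead of a position. A generic deadzone-plus-saturation version of such a mapping is sketched below; the deadzone width, range, and velocity limit are illustrative values, not the calibrated ones.

```python
def deflection_to_velocity(theta_deg, deadzone=5.0, theta_max=60.0,
                           v_max=10.0):
    """Map stick deflection (degrees) to a commanded joint velocity (deg/s).

    Deflections inside the deadzone command zero velocity; beyond it the
    command grows linearly and saturates at v_max.
    """
    mag = abs(theta_deg)
    if mag <= deadzone:
        return 0.0
    scale = min((mag - deadzone) / (theta_max - deadzone), 1.0)
    return scale * v_max * (1.0 if theta_deg > 0 else -1.0)

for th in (2.0, 15.0, -40.0, 90.0):
    print(th, "->", deflection_to_velocity(th))
```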
15 pages, 1977 KiB  
Article
Positioning Control of Robotic Manipulators Subject to Excitation from Non-Ideal Sources
by Angelo M. Tusset, Amarildo E. B. Pereira, Jose M. Balthazar, Frederic C. Janzen, Clivaldo Oliveira, Maria E. K. Fuziki and Giane G. Lenzi
Robotics 2023, 12(2), 51; https://doi.org/10.3390/robotics12020051 - 27 Mar 2023
Cited by 5 | Viewed by 2177
Abstract
The present work proposes the use of a hybrid controller combining concepts of a PID controller with LQR and a feedforward gain to control the positioning of a 2 DOF robotic arm with flexible joints subject to non-ideal excitations. To characterize the performance of the controls, two cases were studied. The first case considered the positioning control of the two links in fixed positions, while the second case considered the situation in which the second link is in rotational movement and the first one stays in a fixed position, representing a system with a non-ideal excitation source. In addition to the second case, the sensitivity of the proposed controls for changes in the length and mass of the second link in the rotational movement was analyzed. The results of the simulations showed the effectiveness of the controls, demonstrating that the PID control combined with feedforward gain provides the lowest error for both cases studied; however, it is sensitive to variations in the mass of the second link, in the case of rotational movements. The numerical results also revealed the effectiveness of the PD control obtained by LQR, presenting results similar to the PID control combined with feedforward gain, demonstrating the importance of the optimal control design. Full article
Figures:
Figure 1: (a) Example of a robotic manipulator with a non-ideal excitation source. (b) Representation of the non-ideal system using a DC motor with an unbalanced mass.
Figure 2: Schematic of the manipulator with flexible joints.
Figure 3: Positioning error. (a) PD control. (b) PD + feedforward control. (c) PID + feedforward control.
Figure 4: Positioning error for the case of M_2 = 1 kg and L_2 = 0.05 m. (a) PD control. (b) PD + feedforward control. (c) PID + feedforward control.
Figure 5: Positioning error for the case of M_2 = 1 kg and L_2 = 0.1 m. (a) PD control. (b) PD + feedforward control. (c) PID + feedforward control.
Figure 6: Positioning error for the case of M_2 = 0.1 kg and L_2 = 0.1 m. (a) PD control. (b) PD + feedforward control. (c) PID + feedforward control.
Figure 7: Positioning error for the case of M_2 = 0.05 kg and L_2 = 0.1 m. (a) PD control. (b) PD + feedforward control. (c) PID + feedforward control.
19 pages, 16889 KiB  
Article
Mapping the Tilt and Torsion Angles for a 3-SPS-U Parallel Mechanism
by Swaminath Venkateswaran and Damien Chablat
Robotics 2023, 12(2), 50; https://doi.org/10.3390/robotics12020050 - 24 Mar 2023
Viewed by 3027
Abstract
This article presents the analysis of a parallel mechanism of type 3-SPS-U. The usual singularity analysis is carried out with respect to the Euler angles of the universal joint. However, this approach is computationally expensive, especially when stacked structures are analyzed. Thus, the positioning of the mechanism's mobile platform is analyzed using the theory of Tilt and Torsion (T&T). The singularity-free workspace and the tilt limits of the mechanism are disclosed through this method. These workspaces can then be mapped to the Euler angles of the universal joint, and the relation between the T&T space and the Euler space is demonstrated and validated in this study. Initially, simulations are performed using the results of the singularity analysis to compare the T&T space and the Euler space. Experimental validation is then carried out on the prototype of the mechanism by performing a circular trajectory, in line with the simulations. The outcome of this study will be helpful for the integration of the mechanism into a piping inspection robot and also for the analysis of stacked architectures. Full article
(This article belongs to the Special Issue Robotics and Parallel Kinematic Machines)
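For readers unfamiliar with the Tilt-and-Torsion parametrization used above, the orientation it encodes factors into elementary rotations as R = Rz(azimuth) · Ry(tilt) · Rz(torsion − azimuth). A minimal numeric sketch (our own illustration using the paper's tilt α and azimuth β symbols; the zero-torsion default is our assumption, reflecting the two rotational freedoms of the universal joint):

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def tilt_torsion_rotation(alpha, beta, sigma=0.0):
    """Orientation from tilt alpha, azimuth beta, torsion sigma (radians).

    Standard T&T decomposition: R = Rz(beta) @ Ry(alpha) @ Rz(sigma - beta).
    With sigma = 0 this describes a zero-torsion (tilt-only) posture.
    """
    return rot_z(beta) @ rot_y(alpha) @ rot_z(sigma - beta)

# Example: 10 degrees of tilt at a 45-degree azimuth, no torsion.
R = tilt_torsion_rotation(np.deg2rad(10), np.deg2rad(45))
```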
Show Figures

Figure 1: The 3D model (left) and the 2D view (right) of the rigid bio-inspired piping-inspection robot developed at LS2N, France [10].
Figure 2: The 2D view of the flexible piping-inspection robot with the parallel mechanism [16].
Figure 3: Representation of the (a) parallel mechanism at home pose, (b) 3D view of the correlation to a 3-SPS-U manipulator and (c) 2D view of the manipulator.
Figure 4: Representation of the tilt (α) and azimuth (β) angles on the parallel mechanism under study.
Figure 5: Representation of (a) singularities and workspace zones for the mechanism at lmin = 40 mm and (b) extraction of the feasible workspace around the home pose at lmin = 40 mm.
Figure 6: Postures of the 3-SPS-U mechanism in the T&T space and the Euler space.
Figure 7: Representation of the (a) experimental setup of the mechanism in vertical orientation, (b) experimental setup of the mechanism in horizontal orientation, (c) digital model of the experimental setup and (d) one of the three ESCON 36/2 servo-controllers.
Figure 8: Closed-loop PID controller employed for the mechanism [8].
Figure 9: Position of the prismatic springs along the circular trajectory in the (a–c) vertical and (d–f) horizontal orientations of the mechanism.
Figure 10: Joint position errors for the (a) vertical and (b) horizontal orientations of the mechanism.
Figure 11: Motor torques generated on each motor during operation in the (a) vertical and (b) horizontal orientations of the mechanism.
21 pages, 2304 KiB  
Article
RV4JaCa—Towards Runtime Verification of Multi-Agent Systems and Robotic Applications
by Debora C. Engelmann, Angelo Ferrando, Alison R. Panisson, Davide Ancona, Rafael H. Bordini and Viviana Mascardi
Robotics 2023, 12(2), 49; https://doi.org/10.3390/robotics12020049 - 24 Mar 2023
Cited by 7 | Viewed by 2348
Abstract
This paper presents a Runtime Verification (RV) approach for Multi-Agent Systems (MAS) using the JaCaMo framework. Our objective is to bring a layer of security to the MAS, keeping in mind possible safety-critical uses of the MAS, such as robotic applications. This layer is capable of monitoring events during the execution of the system without requiring a specific implementation in the behaviour of each agent to recognise the events. In this paper, we mainly focus on MAS used in the context of hybrid intelligence, which requires communication between software agents and human beings. In some cases, communication takes place via natural language dialogues. However, this kind of communication raises a concern related to controlling the flow of dialogue so that agents can prevent any change in the topic of discussion that could impair their reasoning and undermine the development of the software agents. We tackle this problem by proposing and demonstrating the implementation of a framework that aims to control the dialogue flow in a MAS, especially when the MAS communicates with the user through natural language to aid decision-making in a hospital bed-allocation scenario. Full article
(This article belongs to the Special Issue Agents and Robots for Reliable Engineered Autonomy 2023)
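Since the approach checks dialogue events against a protocol at runtime, a toy state-machine monitor conveys the idea (entirely our sketch: the performatives and the allowed-transition table below are invented, not the paper's bed-allocation protocol):

```python
class DialogueMonitor:
    """Toy runtime monitor that flags off-protocol dialogue moves."""

    # Made-up protocol: which performatives may follow the current one.
    ALLOWED = {
        "greet": {"ask_allocation"},
        "ask_allocation": {"propose_bed", "ask_clarification"},
        "ask_clarification": {"ask_allocation"},
        "propose_bed": {"accept", "reject"},
    }

    def __init__(self, start="greet"):
        self.state = start

    def check(self, performative):
        """Advance on a legal move; report a violation otherwise."""
        if performative in self.ALLOWED.get(self.state, set()):
            self.state = performative
            return True
        return False  # the RV layer would raise a violation event here

# DialogueMonitor().check("ask_allocation") -> True
```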
Show Figures

Figure 1: Overview of the three dimensions of JaCaMo [10]. Reproduced with permission from the authors, Science of Computer Programming, Volume 78, Issue 6; published by Elsevier, 2013.
Figure 2: Approach overview for RV in MAS.
Figure 3: MAIDS architecture.
Figure 4: System's interface.
Figure 5: Chatbot's interface.
14 pages, 1390 KiB  
Article
Revolutionizing Social Robotics: A Cloud-Based Framework for Enhancing the Intelligence and Autonomy of Social Robots
by Abdelrahman Osman Elfaki, Mohammed Abduljabbar, Luqman Ali, Fady Alnajjar, Dua’a Mehiar, Ashraf M. Marei, Tareq Alhmiedat and Adel Al-Jumaily
Robotics 2023, 12(2), 48; https://doi.org/10.3390/robotics12020048 - 24 Mar 2023
Cited by 12 | Viewed by 4652
Abstract
Social robots have the potential to revolutionize the way we interact with technology, providing a wide range of services and applications in various domains, such as healthcare, education, and entertainment. However, most existing social robotics platforms run on embedded computers, which limits the robots' ability to access the advanced AI-based platforms available online that are required for sophisticated physical human–robot interaction (such as Google Cloud AI, Microsoft Azure Machine Learning, IBM Watson, ChatGPT, etc.). In this research project, we introduce a cloud-based framework that utilizes the benefits of cloud computing and clustering to enhance the capabilities of social robots and overcome the limitations of current embedded platforms. The proposed framework was tested on different robots to assess the general feasibility of the solution, including a customized robot, “BuSaif”, and the commercial robots “Husky”, “NAO”, and “Pepper”. Our findings suggest that the implementation of the proposed platform will result in more intelligent and autonomous social robots that can be utilized by a broader range of users, including those with less expertise. The present study introduces a novel methodology for augmenting the functionality of social robots while simplifying their use for non-experts. This approach has the potential to open up novel possibilities within the domain of social robotics. Full article
(This article belongs to the Special Issue Social Robots for the Human Well-Being)
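At its core, the framework moves heavy inference off the robot and onto cloud services. A minimal sketch of such an offload call (the endpoint URL and response schema below are hypothetical placeholders, not the platform's actual API):

```python
import json
import urllib.request

def offload_to_cloud(payload: bytes,
                     endpoint="https://example.com/api/perceive"):
    """Send raw sensor data to a (hypothetical) cloud inference endpoint.

    The robot's embedded computer only serializes data and reads back the
    result; all heavy AI processing happens server-side.
    """
    req = urllib.request.Request(
        endpoint, data=payload,
        headers={"Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(req, timeout=5.0) as resp:
        return json.loads(resp.read())  # e.g. {"intent": ..., "objects": [...]}
```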
Show Figures

Figure 1: GUI of the proposed platform.
Figure 2: The backend of the GUI.
Figure 3: Meta AI design.
Figure 4: General design.
Figure 5: An overview of the proposed platform.
28 pages, 2248 KiB  
Review
Indoor Positioning Systems of Mobile Robots: A Review
by Jiahao Huang, Steffen Junginger, Hui Liu and Kerstin Thurow
Robotics 2023, 12(2), 47; https://doi.org/10.3390/robotics12020047 - 24 Mar 2023
Cited by 39 | Viewed by 14200
Abstract
Recently, with the in-depth development of Industry 4.0 worldwide, mobile robots have become a research hotspot. Indoor localization has become a key component in many fields and the basis for all actions of mobile robots. This paper screened 147 papers in the field of indoor positioning of mobile robots published from 2019 to 2021. First, 12 mainstream indoor positioning methods and related positioning technologies for mobile robots are introduced and compared in detail. Then, the selected papers are summarized, their common attributes and patterns are identified, and the development trend of indoor positioning of mobile robots is derived. Full article
(This article belongs to the Special Issue The State-of-the-Art of Robotics in Europe)
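Among the radio-frequency techniques compared in the review (TOA, TDOA, AOA, RSSI; see Figure 3 below), RSSI-based positioning is the simplest to sketch. The toy example here is our own illustration: the path-loss parameters are typical textbook values, not figures taken from the review:

```python
import numpy as np

def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_n=2.0):
    """Log-distance path-loss model; both parameters are illustrative."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_n))

def trilaterate(anchors, distances):
    """Linear least-squares 2D position fix from three or more anchors."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = anchors[0]
    # Subtracting the first range equation from the rest linearizes the fix.
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0]**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - x0**2 - y0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# trilaterate([(0, 0), (10, 0), (0, 10)], [7.07, 7.07, 7.07]) ~ (5, 5)
```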
Show Figures

Figure 1: Schematic diagram of the IMU dead-reckoning algorithm.
Figure 2: Visual SLAM flowchart.
Figure 3: Radio-frequency technologies. (a) TOA; (b) TDOA; (c) AOA; (d) RSSI.
Figure 4: Word cloud of the 147 paper titles.
Figure 5: The number of papers for the 12 localization methods.
24 pages, 4387 KiB  
Article
VEsNA, a Framework for Virtual Environments via Natural Language Agents and Its Application to Factory Automation
by Andrea Gatti and Viviana Mascardi
Robotics 2023, 12(2), 46; https://doi.org/10.3390/robotics12020046 - 21 Mar 2023
Cited by 5 | Viewed by 3003
Abstract
Automating a factory where robots are involved is neither trivial nor cheap. Engineering the factory automation process in such a way that return on investment is maximized and risk to workers and equipment is minimized is hence of paramount importance. Simulation can be a game changer in this scenario but requires advanced programming skills that domain experts and industrial designers might not have. In this paper, we present the preliminary design and implementation of a general-purpose framework for creating and exploiting Virtual Environments via Natural language Agents (VEsNA). VEsNA takes advantage of agent-based technologies and natural language processing to enhance the design of virtual environments. The natural language input provided to VEsNA is understood by a chatbot and passed to an intelligent cognitive agent that implements the logic behind displacing objects in the virtual environment. In the complete VEsNA vision, for which this paper provides the building blocks, the intelligent agent will be able to reason on this displacement and on its compliance with legal and normative constraints. It will also be able to implement what-if analysis and case-based reasoning. Objects populating the virtual environment will include active objects and will populate a dynamic simulation whose outcomes will be interpreted by the cognitive agent; further autonomous agents, representing workers in the factory, will be added to make the virtual environment even more realistic; explanations and suggestions will be passed back to the user by the chatbot. Full article
(This article belongs to the Special Issue Agents and Robots for Reliable Engineered Autonomy 2023)
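As a toy illustration of the chatbot-to-agent step described above (entirely our own sketch: VEsNA's natural-language understanding is far richer than pattern matching, and the command schema below is invented):

```python
import re

def parse_command(utterance):
    """Map a narrow class of English sentences to scene actions."""
    text = utterance.lower().strip()
    m = re.match(r"add a (\w+) at \((\d+), ?(\d+)\)", text)
    if m:
        return {"action": "add", "object": m.group(1),
                "x": int(m.group(2)), "y": int(m.group(3))}
    m = re.match(r"remove the (\w+)", text)
    if m:
        return {"action": "remove", "object": m.group(1)}
    return {"action": "unknown"}

# parse_command("Add a conveyor at (3, 5)")
# -> {'action': 'add', 'object': 'conveyor', 'x': 3, 'y': 5}
```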
Show Figures

Figure 1: Example of a visionary interaction between a user and VEsNA. On the right, the initial empty scene tagged with number (0) and, below, the scenes resulting from actions operated by VEsNA, according to the dialogue on the left. Numbers are also inserted in the dialogue to clarify which part of the conversation results in which action and hence in which scene.
Figure 2: VEsNA architectural scheme.
Figure 3: Sequence diagram of the execution. The actual functions called change depending on the user input. This figure represents a general skeleton of the interactions among VEsNA components.
Figure 4: Computation of global positions.
Figure 5: Computation of relative positions.
Figure 6: Model in Unity.
Figure 7: Example of object addition with global coordinates.
Figure 8: Example of object addition with relative coordinates.
Figure 9: Example of object removal (above) and result of further interactions (below).
Figure 10: Example of employee addition (above) and employee movement (below).
2 pages, 187 KiB  
Editorial
Special Issue on Advances in Industrial Robotics and Intelligent Systems
by António Paulo Moreira, Pedro Neto and Félix Vidal
Robotics 2023, 12(2), 45; https://doi.org/10.3390/robotics12020045 - 20 Mar 2023
Viewed by 1561
Abstract
Robotics and intelligent systems are intricately connected, each exploring their respective capabilities and moving towards a common goal [...] Full article
(This article belongs to the Special Issue Advances in Industrial Robotics and Intelligent Systems)
20 pages, 7213 KiB  
Article
Estimation of Knee Assistive Moment in a Gait Cycle Using Knee Angle and Knee Angular Velocity through Machine Learning and Artificial Stiffness Control Strategy (MLASCS)
by Khemwutta Pornpipatsakul and Nopdanai Ajavakom
Robotics 2023, 12(2), 44; https://doi.org/10.3390/robotics12020044 - 17 Mar 2023
Cited by 5 | Viewed by 3809
Abstract
Nowadays, many people around the world cannot walk normally because of knee problems. A knee-assistive device is one option to support walking for those with weak or insufficient knee muscle force. Many research studies have created knee devices with control systems implementing different techniques and sensors. This study proposes an alternative version of the knee-device control system that avoids using many actuators and sensors. It applies the machine learning and artificial stiffness control strategy (MLASCS), which uses one actuator combined with an encoder to estimate the amount of assistive support in a walking gait from recorded gait data. The study recorded several gait datasets and analyzed the knee moments, and then trained a k-nearest-neighbor model using the knee angle and the angular velocity to classify the state in a gait cycle. This control strategy also implements instantaneous artificial stiffness (IAS), a control scheme that requires only the knee angle in each state to determine the amount of supporting moment. After validating the model via simulation, the accuracy of the machine learning model is around 99.9% at a speed of 165 observations/s, and the walking effort is reduced by up to 60% in a single gait cycle. Full article
(This article belongs to the Topic Intelligent Systems and Robotics)
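The classifier described above decides the gait state from two features, the knee angle and its angular velocity. A minimal sketch of such a k-nearest-neighbor model (the tiny training set is fabricated for illustration; the paper trains on its recorded gait data):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Features: [knee angle (deg), knee angular velocity (deg/s)].
# Labels: the four gait-cycle states named in the paper.
X = np.array([[5, -10], [8, 5], [40, 120], [60, -80],
              [12, 15], [55, 90], [30, -150], [3, 2]])
y = ["initial place", "final place", "initial lift", "final lift",
     "final place", "initial lift", "final lift", "initial place"]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)

# The predicted state selects the stiffness profile, from which the
# assistive moment is computed using the knee angle alone.
state = knn.predict([[45.0, 100.0]])[0]
```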
Show Figures

Figure 1: Human gait cycle (modified from [70,79]).
Figure 2: Human knee joint angle (a) and moment per body mass (b) in a gait cycle.
Figure 3: Positions of markers for collecting data from the Qualisys motion capture system.
Figure 4: The experiment room consists of 8 marker detectors, 2 force plates, and 1 camera, with the origin and axes of the tests (a), and the position of the markers on the participant (b).
Figure 5: The average and the boundary of the maximum and minimum of all post-processed data in (a) the knee angle and (b) the knee angular velocity (knee omega).
Figure 6: The free-body diagrams of the lower leg and the foot for calculating the knee moment.
Figure 7: The average and the boundary of the knee moment per body mass from the calculations.
Figure 8: The plot of the nine-trial knee angle from post-processed recorded data and knee omega from post-processed and calculated recorded data by Equation (2); the plot can be separated into initial place, final place, initial lift, and final lift states.
Figure 9: Knee angle and states of a gait cycle.
Figure 10: The validation confusion matrix of the machine learning model.
Figure 11: The instantaneous artificial stiffness per body mass (IASPB) path on the knee angle in (a) the initial place state; (b) the final place state; (c) the initial lift state; (d) the final lift state.
Figure 12: The concept of artificial stiffness control combined with machine learning.
Figure 13: Plots of the average and boundary of the knee moment (M_knee) and the remaining knee moment (M_r), i.e., the moment that is still required for walking after being supported by the device when the percentage of support (n) is 0.7.
Figure 14: Comparison of the effort over a gait cycle from recorded walking data and the remaining effort over a gait cycle in each trial. Note that the numbers shown at the top of the bars are the total effort of each bar.
37 pages, 817 KiB  
Article
A Broad View on Robot Self-Defense: Rapid Scoping Review and Cultural Comparison
by Martin Cooney, Masahiro Shiomi, Eduardo Kochenborger Duarte and Alexey Vinel
Robotics 2023, 12(2), 43; https://doi.org/10.3390/robotics12020043 - 16 Mar 2023
Cited by 6 | Viewed by 4616
Abstract
With power comes responsibility: as robots become more advanced and prevalent, the role they will play in human society becomes increasingly important. Given that violence is an important problem, the question emerges if robots could defend people, even if doing so might cause harm to someone. The current study explores the broad context of how people perceive the acceptability of such robot self-defense (RSD) in terms of (1) theory, via a rapid scoping review, and (2) public opinion in two countries. As a result, we summarize and discuss: increasing usage of robots capable of wielding force by law enforcement and military, negativity toward robots, ethics and legal questions (including differences to the well-known trolley problem), control in the presence of potential failures, and practical capabilities that such robots might require. Furthermore, a survey was conducted, indicating that participants accepted the idea of RSD, with some cultural differences. We believe that, while substantial obstacles will need to be overcome to realize RSD, society stands to gain from exploring its possibilities over the longer term, toward supporting human well-being in difficult times. Full article
(This article belongs to the Special Issue Social Robots for the Human Well-Being)
Show Figures

Figure 1: Could a robot step in to defend a person who is under attack, (a) detecting what is happening, (b) applying needed force, and (c) conducting post-hoc de-escalation?
Figure 2: The process followed for the rapid scoping review.
Figure 3: Papers per year.
Figure 4: Papers by venue type.
Figure 5: The main themes that emerged from our review, represented visually as yellow squares, as well as sub-themes, in purple truncated squares. (Culture was felt to be a separate, overarching theme.) Numbers indicate section numbers.
Figure 6: Characters used in the animations: (a) human (both attacker and victim), (b) humanoid robot, (c) autonomous vehicle (AV).
Figure 7: The animated videos (in each case, the attacker is on the left and the defender on the right): (a) Embodiment. V1–V3: A human attacker is stopped by a (human/humanoid/AV) defender. V4–V5: A humanoid attacker is stopped by a (human/humanoid) defender. All use the same non-lethal force (pushing). (b) Force. V6–V8: A human attacker using lethal force is stopped by a humanoid defender using (lethal force/pushing/disarming).
Figure 8: Questionnaire results.
Figure 9: Summary of the statistical differences found.
15 pages, 5634 KiB  
Article
Energy Efficiency of a Wheeled Bio-Inspired Hexapod Walking Robot in Sloping Terrain
by Marek Žák, Jaroslav Rozman and František V. Zbořil
Robotics 2023, 12(2), 42; https://doi.org/10.3390/robotics12020042 - 15 Mar 2023
Cited by 3 | Viewed by 3781
Abstract
Multi-legged robots, such as hexapods, have great potential to navigate challenging terrain. However, their design and control are usually much more complex and energy-demanding compared to wheeled robots. This paper presents a wheeled six-legged robot with five degrees of freedom per leg that is able to move on a flat surface using wheels and switch to walking in rugged terrain, which reduces energy consumption. The novel joint configuration mimics the structure of insect limbs and allows our robot to overcome difficult terrain. The wheels reduce energy consumption when moving on flat terrain, and the trochanter joint reduces energy consumption when moving on slopes, extending the operating time and range of the robot. The results of experiments on sloping terrain are presented; they confirm that the use of the trochanter joint can reduce energy consumption when moving on sloping terrain. Full article
(This article belongs to the Topic Intelligent Systems and Robotics)
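Using the symbols from the Figure 4 caption below (d1, a3, a4, L, Lt, θ1, θ3, θ4), a generic inverse-kinematics sketch for a coxa–femur–tibia chain reads as follows. The link lengths are placeholders, and, as in the paper's formulation, the reflex-controlled trochanter and tarsus joints are left out:

```python
import numpy as np

def leg_ik(xf, yf, zf, d1=0.05, a3=0.10, a4=0.12):
    """IK of a coxa(theta1)-femur(theta3)-tibia(theta4) leg chain.

    (xf, yf, zf) is the foot tip in the coxa frame; d1, a3, a4 are the
    coxa, femur and tibia lengths (placeholder values in metres).
    """
    theta1 = np.arctan2(yf, xf)            # coxa yaw toward the foot tip
    L = np.hypot(xf, yf) - d1              # horizontal reach past the coxa
    Lt = np.hypot(L, zf)                   # femur joint -> foot tip
    if Lt > a3 + a4:
        raise ValueError("foot tip out of reach")
    alpha = np.arctan2(-zf, L)             # slope down to the foot tip
    beta = np.arccos((a3**2 + Lt**2 - a4**2) / (2 * a3 * Lt))
    gamma = np.arccos((a3**2 + a4**2 - Lt**2) / (2 * a3 * a4))
    return theta1, beta - alpha, np.pi - gamma   # theta1, theta3, theta4
```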
Show Figures

Figure 1: Insect leg structure. The limb of an insect consists of five parts: coxa, trochanter, femur, tibia and tarsus. Inspired by [23].
Figure 2: Robot leg structure. The leg has five Dynamixel servomotors. Each servomotor operates one joint. The servomotors are powered and controlled using a combined bus that chains the servomotors together and provides both power and a serial-line interconnection.
Figure 3: Usage of the trochanter joint on inclined terrain. (a) Our hexapod robot uses trochanter joints on sloping terrain. The leg is set parallel to the gravitational force. (b) A common hexapod without a trochanter joint. The gravitational force adds load to the coxa joint.
Figure 4: The leg coordinate system established for the purposes of the inverse kinematic calculations. d1 is the coxa length, a3 is the femur length, a4 is the tibia length, L is the distance between the coxa joint and the foot tip, Lt is the distance between the femur joint and the foot tip, θ1, θ3 and θ4 are the angles of the coxa, femur and tibia joints, α, β and γ are angles used during the inverse kinematic calculations, and xf, yf and zf are the foot-tip coordinates. Both the trochanter and tarsus joints are controlled by a reflexive layer and are, thus, not included in the inverse kinematic calculations. Inspired by [27].
Figure 5: Robot stability during movement. The supporting legs are shown in black; X represents the robot's center of gravity. (a) A statically stable robot has its center of gravity inside the polygon formed by the supporting legs. (b) A statically unstable robot does not have its center of gravity inside the polygon formed by the supporting legs and is thus in danger of falling. (c) The robot balances at the edge of stability, expressed by the center of gravity lying on the edge of the polygon formed by the supporting legs. The figure is inspired by [32].
Figure 6: Control flow chart of the robot controller. Sensors provide data to the reflexive layer, which can control leg movement directly in case of reflex activation. Sensor data are also sent to the terrain controller, where they are transformed and used by the gait selector to determine the most appropriate gait for the current terrain. The chosen gait is executed by the leg coordinator, which controls the leg controllers.
Figure 7: Robot stances. (a) The default stance. (b) The trochanter joint is not used and the legs have the same height. (c) The trochanter joint is not used, but the leg height is adjusted according to the slope. (d) The angle of rotation of the trochanter joint is adjusted to the slope of the tested terrain.
Figure 8: Overall current on different slopes for static stances and movements. (a) Static stances: the stance using the trochanter joint and the stance with different leg heights had almost the same energy requirements. In contrast, the basic stance required more and more energy as the terrain slope increased. (b) Tripod and wheel mode: movement using the trochanter joint had a lower power consumption at all measured inclinations. The use of the trochanter joint also reduced the power consumption when using wheels.
Figure 9: Charts of all legs and servomotors for 0° and 32° slopes in different static stances. The graphs show the load on each servomotor of each leg at different terrain slopes using the basic stance, the stance with different leg heights and the stance using the trochanter joint. The abbreviations used for the leg names are as follows: FR—front right, FL—front left, MR—middle right, ML—middle left, RR—rear right, RL—rear left.
Figure 10: Current during movement on different slopes. (a) Current of the robot when moving using the tripod gait with, and without, the trochanter joint on 0° and 23° slopes. The chart shows two periods of the tripod gait (one period was 1.6 s). Between 0.4 and 2 s, and between 2 and 3.6 s, respectively, the stance phase was followed by the swing phase of one group of legs and, at the same time, the swing phase was followed by the stance phase of the other group of legs. The maximum speed of the robot using the tripod gait was 0.12 m/s. (b) Current of the robot when moving using wheeled locomotion with, and without, the trochanter joint on 0° and 23° slopes. The chart shows the first two seconds of the movement. In the first part, the robot accelerated to its maximum speed and the current gradually increased. In the second part, the speed of the robot no longer changed and the current oscillated around a constant value. The maximum speed of the robot using wheeled locomotion was 0.2 m/s.
16 pages, 7432 KiB  
Article
Grasping Complex-Shaped and Thin Objects Using a Generative Grasping Convolutional Neural Network
by Jaeseok Kim, Olivia Nocentini, Muhammad Zain Bashir and Filippo Cavallo
Robotics 2023, 12(2), 41; https://doi.org/10.3390/robotics12020041 - 15 Mar 2023
Cited by 1 | Viewed by 3001
Abstract
Vision-based pose detection and grasping complex-shaped and thin objects are challenging tasks. We propose an architecture that integrates the Generative Grasping Convolutional Neural Network (GG-CNN) with depth recognition to identify a suitable grasp pose. First, we construct a training dataset with data augmentation to train a GG-CNN with only RGB images. Then, we extract a segment of the tool using a color segmentation method and use it to calculate an average depth. Additionally, we apply and evaluate different encoder–decoder models with a GG-CNN structure using the Intersection Over Union (IOU). Finally, we validate the proposed architecture by performing real-world grasping and pick-and-place experiments. Our framework achieves a success rate of over 85.6% for picking and placing seen surgical tools and 90% for unseen surgical tools. We collected a dataset of surgical tools and validated their pick and place with different GG-CNN architectures. In the future, we aim to expand the dataset of surgical tools and improve the accuracy of the GG-CNN. Full article
(This article belongs to the Section Medical Robotics and Service Robotics)
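The depth-recovery step described above (HSV colour segmentation followed by averaging depth over the tool mask) can be sketched with OpenCV roughly as follows; the HSV bounds are placeholder values, not the calibrated thresholds used for the surgical tools:

```python
import cv2
import numpy as np

def tool_average_depth(bgr, depth,
                       hsv_lo=(35, 60, 60), hsv_hi=(85, 255, 255)):
    """Segment a tool by HSV colour and average depth over its mask.

    bgr:   HxWx3 uint8 image (OpenCV's BGR channel order)
    depth: HxW float array of depth values in metres
    Returns the mean depth over the mask, or None if nothing matched.
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    valid = (mask > 0) & np.isfinite(depth) & (depth > 0)
    if not valid.any():
        return None
    return float(depth[valid].mean())  # depth used for the grasp height
```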
Show Figures

Figure 1: Overview of our proposed architecture. From the Kinect, an RGB image is cropped to obtain a 300 × 300 image as input to the GG-CNN/GG-CNN2 network for tool detection and for calculating the grasp position (x and y) and orientation. The cropped image is also used as input to the color segmentation method to obtain the tool segmentation, which is used to calculate the average depth. We used the OpenCV library with the HSV method to obtain a black-and-white image in which the surgical tool is white and the rest of the image is black. The average depth of the object is calculated by matching the white image against the depth image.
Figure 2: GG-CNN2 network architecture [2]. The GG-CNN2 directly generates a grasp pose G from the grasp quality Q, grasp width W, and grasp angle Φ.
Figure 3: The experimental setup with surgical tools. In (a), the different shapes of the 27 surgical tools for training and testing (seen objects) are shown. In (b), the 9 new surgical tools for validation (unseen objects) are displayed, and in (c), the experimental setup is provided. The experimental setup is composed of a UR5 manipulator, a Robotiq gripper, a vision sensor, and a surgical tool.
Figure 4: On the left, the real and ground-truth grasping rectangles; on the right, the q value of the image.
Figure 5: In (a), the GG-CNN2 calculates and visualizes four different grasping-box rectangles of surgical tools in RGB and depth using the q value and angle. In (b), the real-time segmentation of a single surgical tool is matched to the tool in (a).
Figure 6: The process of grasping multiple surgical tools in a cluttered environment. During grasping, the GG-CNN2 searches for a new grasping rectangle, together with color-based segmentation of the scene, in the first and second rows. In the last row, the robotic arm and gripper grasp the tools based on the networks' information.
Figure 7: Failure examples: (a) no grasping-box detection based on the q value, (b) wrong segmentation as well as no grasping-box detection, (c) wrong grasping box, and (d) a tool falling in a cluttered environment.
18 pages, 860 KiB  
Article
A Novel Evolving Type-2 Fuzzy System for Controlling a Mobile Robot under Large Uncertainties
by Ayad Al-Mahturi, Fendy Santoso, Matthew A. Garratt and Sreenatha G. Anavatti
Robotics 2023, 12(2), 40; https://doi.org/10.3390/robotics12020040 - 10 Mar 2023
Cited by 9 | Viewed by 2610
Abstract
This paper presents the development of a type-2 evolving fuzzy control system (T2-EFCS) to facilitate self-learning (either from scratch or from a predefined rule base). The system has two major learning stages, namely structure learning and parameter learning. The structure phase does not require previous information about the fuzzy structure: it can start constructing its rules from scratch with only one initial fuzzy rule, and the rules are then continuously updated and pruned in an online fashion to achieve the desired set point. In the parameter-learning phase, all adjustable parameters of the fuzzy system are tuned using a sliding-surface method based on the gradient-descent algorithm, which minimizes the difference between the expected and actual signals. The proposed control method is model-free and does not require prior knowledge of the plant's dynamics or constraints. Instead, it uses data-driven, artificial-intelligence-based techniques, such as fuzzy logic systems, to learn the dynamics of the system, adapt to changes in the system, and account for complex interactions between different components. A robustness term is incorporated into the control effort to deal with external disturbances. The proposed technique is applied to regulate the dynamics of a mobile robot in the presence of multiple external disturbances, demonstrating the robustness of the proposed control system. A rigorous comparative study with respect to three different controllers is performed, where the outcomes illustrate the superiority of the proposed learning method as evidenced by lower RMSE values and fewer fuzzy parameters. Lastly, stability analysis of the proposed control method is conducted using Lyapunov stability theory. Full article
(This article belongs to the Section Intelligent Robots and Mechatronics)
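Figure 1 below depicts an interval type-2 Gaussian membership function. One common realization, shown here purely as an illustration (the fixed-mean, uncertain-standard-deviation form is our assumption), returns a lower and an upper membership grade whose gap forms the footprint of uncertainty:

```python
import numpy as np

def it2_gaussian_mf(x, mean=0.0, sigma_lo=0.8, sigma_hi=1.2):
    """Interval type-2 Gaussian MF with an uncertain standard deviation.

    Returns (lower, upper) membership grades for input x; all parameter
    values are placeholders.
    """
    mu_lo = np.exp(-0.5 * ((x - mean) / sigma_lo) ** 2)
    mu_hi = np.exp(-0.5 * ((x - mean) / sigma_hi) ** 2)
    return np.minimum(mu_lo, mu_hi), np.maximum(mu_lo, mu_hi)

# it2_gaussian_mf(1.0) -> lower grade from the narrow Gaussian,
#                         upper grade from the wide one.
```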
Show Figures

Figure 1: Interval type-2 Gaussian membership function.
Figure 2: Flowchart of our proposed T2-EFCS.
Figure 3: Kinematic model of the differential-drive mobile robot.
Figure 4: Performance of the proposed control system in the nominal condition. (a) Desired vs. actual positions of the robot path in the nominal condition. (b) Distance-error evolution for different controllers. (c) Evolution of the fuzzy rules for the proposed T2-EFCS.
Figure 5: Performance of the proposed system in the face of measurement noise. (a) Desired vs. actual positions of the robot path in the face of sensor noise. (b) Distance-error evolution for different controllers in the face of measurement noise. (c) Evolution of the fuzzy rules for T2-EFCS in the face of measurement noise.
Figure 6: Performance of the proposed system in the face of external disturbance. (a) Desired vs. actual positions of the robot path in the face of external disturbance. (b) Distance-error evolution for different controllers in the face of external disturbance. (c) Evolution of the fuzzy rules for the proposed T2-EFCS in the face of external disturbance.
29 pages, 40327 KiB  
Article
A Multistage Framework for Autonomous Robotic Mapping with Targeted Metrics
by William Smith, Yongming Qin, Siddharth Singh, Hudson Burke, Tomonari Furukawa and Gamini Dissanayake
Robotics 2023, 12(2), 39; https://doi.org/10.3390/robotics12020039 - 9 Mar 2023
Viewed by 2795
Abstract
High-quality maps are pertinent to performing tasks requiring precise interaction with the environment. The current challenges in creating a high-precision map come from the need for both high pose accuracy and high scan accuracy, together with the goal of reliable autonomous performance of the task. In this paper, we propose a multistage framework in which an autonomous mobile robot creates a high-precision map of an environment that satisfies a targeted resolution and local accuracy. The proposed framework consists of three steps, each intended to resolve a challenge faced by conventional approaches. To ensure that pose estimation is performed with high accuracy, a globally accurate coarse map of the environment is created using a conventional technique such as simultaneous localization and mapping or structure from motion with bundle adjustment. High scan accuracy is ensured by planning a path for the robot to revisit the environment while maintaining a desired distance to all occupied regions. Since the map is to be created with targeted metrics, an online path-replanning and pose-refinement technique is proposed to autonomously achieve the metrics without compromising the pose and scan accuracy. The proposed framework was first validated on its ability to address the current accuracy challenges through parametric studies of the proposed steps. The autonomous capability of the proposed framework was then demonstrated successfully in a practical mission. Full article
(This article belongs to the Section AI in Robotics)
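Step 2's path planning keeps the robot at a desired distance from all occupied regions. A much-simplified stand-in for that pixel-selection step (our sketch: resolution, standoff and tolerance are placeholder values, and the real planner also orders the pixels into a traversable path):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def standoff_pixels(occupancy, standoff_m, resolution_m=0.05, tol_m=0.05):
    """Select free cells lying at a given distance from occupied space.

    occupancy: 2D bool array (True = occupied) from the coarse map's OGM.
    Returns (row, col) indices of candidate path pixels.
    """
    dist_m = distance_transform_edt(~occupancy) * resolution_m
    return np.argwhere(np.abs(dist_m - standoff_m) < tol_m)
```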
Show Figures

Figure 1: The different mapping pipelines used to create a map of an unknown environment.
Figure 2: The images and poses are used as inputs for NeRF [30] to generate a 3D rendering of the captured object. The standard set of images has a lower resolution per unit area compared to the images taken closer to the surface. The differences in results are not very noticeable from the far viewpoint, but they can clearly be seen from the close viewpoint.
Figure 3: The proposed multistage approach for autonomous high-precision mapping consists of three steps. Step 1 constructs a globally accurate coarse map to be used in the latter steps, which use the coarse map as prior information to focus on high precision. Step 2 plans a path for the robot observations to achieve the desired metrics. Step 3 adjusts the path to ensure the metrics are met and the data are suitable for high-precision mapping.
Figure 4: Diagram of an indoor environment with a depth sensor making an observation on a planar wall.
Figure 5: Unit viewing area for a sensor at a given distance, given by rotating the field of view and inscribing a square into the union of all of the fields of view.
Figure 6: Resultant OGM (a) and UDM (b) created from the coarse map.
Figure 7: Pixels (cyan color) selected to make up the offline path.
Figure 8: Resulting 3D offline path of a room with two horizontal layers (red paths) and three vertical layers (green paths). The object located at x = 4, y = 9 does not reach the upper-most layer.
Figure 9: Two examples of synthetic maps used for the validation of Step 2. The planned paths are indicated with the dotted lines around the objects, and the arrows show the sequential movement.
Figure 10: Box plots of the computational time with respect to the map size and the number of objects in the environment.
Figure 11: Known structures for testing the resolution and local accuracy of the proposed method. (a) Periodic surface, (b) non-periodic surface, (c) flat surface.
Figure 12: Histogram of local resolution for the controlled environments.
Figure 13: Histogram of local accuracy for the controlled environments.
Figure 14: UGV used for map refinement and the environment of the real-world experiment. (a) UGV with manipulator, (b) test-environment obstacles and furniture.
Figure 15: Resulting 3D offline path created during Step 2.
Figure 16: Comparisons of the resulting NeRF [30] renderings using different techniques to estimate the poses of the images. (a) Poses generated with COLMAP [31]. (b) Poses generated with ICP localization using a rotating 3D LiDAR. (c) Poses generated using the proposed multistage framework.
Figure A1: Coarse global map created using the conventional SLAM technique (RTAB-Map [42]).
Figure A2: OGMs of the test environment at different layers. (a) Floor layer (z = −0.222 m). (b) Ceiling layer (z = −0.0253 m). (c) First layer (z = −1.29 m). (d) Second layer (z = −0.514 m). (e) Third layer (z = 0.266 m). (f) Fourth layer (z = 1.04 m).
Figure A3: UDMs of the test environment at different layers. (a) First layer (z = −1.29 m). (b) Second layer (z = −0.514 m). (c) Third layer (z = 0.266 m). (d) Fourth layer (z = 1.04 m).
Figure A4: Pixels selected to compose the path for the test environment at different layers. (a) First layer (z = −1.29 m). (b) Second layer (z = −0.514 m). (c) Third layer (z = 0.266 m). (d) Fourth layer (z = 1.04 m).
Figure A5: The paths planned for the test environment at different layers. (a) Floor layer (z = −0.222 m). (b) Ceiling layer (z = −0.0253 m). (c) First layer (z = −1.29 m). (d) Second layer (z = −0.514 m). (e) Third layer (z = 0.266 m). (f) Fourth layer (z = 1.04 m).
Figure A6: Practical experiment in a machine room with narrow spaces. (a) Image of the machine room. (b) Map created using the proposed framework by registering the point clouds.
Figure A7: Practical experiment in a robotics laboratory with long horizontal spaces. (a) Image of the robotics laboratory. (b) Map created using the proposed framework by registering the point clouds.
Figure A8: Practical experiment in a nuclear reactor's silo with tall vertical spaces. (a) Image of the nuclear reactor's silo. (b) Map created using the proposed framework by registering the point clouds.
26 pages, 7722 KiB  
Article
Inverse Kinematic Solver Based on Bat Algorithm for Robotic Arm Path Planning
by Mohamed Slim, Nizar Rokbani, Bilel Neji, Mohamed Ali Terres and Taha Beyrouthy
Robotics 2023, 12(2), 38; https://doi.org/10.3390/robotics12020038 - 9 Mar 2023
Cited by 6 | Viewed by 3562
Abstract
The bat algorithm (BA) is a nature-inspired algorithm mimicking the bio-sensing characteristic of bats known as echolocation. This paper suggests a bat-based meta-heuristic for the inverse kinematics problem of a robotic arm. An intrinsically modified BA is proposed to find an inverse kinematics (IK) solution that respects a minimum variation of the joints' elongation from the initial configuration of the robot manipulator to the proposed new pause position. The proposed method is called IK-BA; it stands for a specific bat algorithm dedicated to robotic arms' inverse geometric solution, where the elongation-control mechanism is embedded in the bat agents' update equations. A performance analysis and comparisons with related state-of-the-art meta-heuristic solvers showed the effectiveness of the proposed IK bat solver for single-point IK planning as well as for geometric path planning, which may have several industrial applications. IK-BA was also applied to a real robotic arm with a spherical wrist as a proof of concept and of the pertinence of the proposed approach. Full article
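For context, the canonical bat-algorithm update that IK-BA modifies works on a population of candidate joint vectors. A generic sketch of one iteration follows (this is the standard BA kernel, not the paper's modified IK-BA, whose elongation-control terms are embedded in these same update equations; all constants are illustrative):

```python
import numpy as np

def bat_step(pos, vel, best, rng, f_min=0.0, f_max=2.0,
             loudness=0.5, pulse_rate=0.5):
    """One generic bat-algorithm update over a population.

    pos, vel: (n_bats, n_joints) arrays; best: (n_joints,) best-so-far.
    Frequencies are drawn per bat, velocities pull bats toward the best
    solution, and some bats instead do a local walk around the best.
    """
    n = pos.shape[0]
    freq = f_min + (f_max - f_min) * rng.random(n)
    vel = vel + (pos - best) * freq[:, None]
    cand = pos + vel
    walk = rng.random(n) > pulse_rate          # gated by the pulse rate
    cand[walk] = best + 0.01 * loudness * rng.standard_normal(
        (int(walk.sum()), pos.shape[1]))
    return cand, vel

# rng = np.random.default_rng(0); candidates are then accepted or rejected
# against a fitness (here: IK position error plus joint-variation terms).
```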
Show Figures

Figure 1: Relation between forward and inverse kinematics.
Figure 2: Wrist's position relative to the end-effector's position.
Figure 3: Flowchart of the proposed methodology.
Figure 4: Kinematic scheme of the KUKA LBR iiwa 14 R820.
Figure 5: Helical trajectory for the path-tracking test.
Figure 6: Average position error in path tracking.
Figure 7: Average angle variation from point to point in path tracking.
Figure 8: Average computing time for IK solutions from point to point in path tracking.
Figure 9: Generated joint angles along the trajectory using (a) IK-BA, (b) BA, (c) DE, (d) PSO, (e) K-ABC, (f) MO-PSO.
Figure 10: Box plot of the ANOVA test of the average position error.
Figure 11: Box plot of the ANOVA test of the average angle variation.
Figure 12: Box plot of the ANOVA test of the average computational time.
Figure 13: Dobot Magician robot.
Figure 14: Actual trajectory and end-effector's trajectory.
Figure 15: Dobot Magician robot and circular trajectory.
Figure 16: Generated joint angles using IK-BA along the trajectory.
Figure 17: Position error between the robot's end-effector and the target point of the trajectory.
Figure 18: Average position-error curve along 1000 iterations.
30 pages, 7237 KiB  
Review
Research Perspectives in Collaborative Assembly: A Review
by Thierry Yonga Chuengwa, Jan Adriaan Swanepoel, Anish Matthew Kurien, Mukondeleli Grace Kanakana-Katumba and Karim Djouani
Robotics 2023, 12(2), 37; https://doi.org/10.3390/robotics12020037 - 7 Mar 2023
Cited by 7 | Viewed by 5766
Abstract
In recent years, the emergence of Industry 4.0 technologies has introduced manufacturing disruptions that necessitate the development of accompanying socio-technical solutions. There is growing interest among manufacturing enterprises in embracing the drivers of the Smart Industry paradigm. Among these drivers, human–robot physical co-manipulation of objects has gained significant interest in the literature on assembly operations. Motivated by the requirement for effective dyads between the human and the robot counterpart, this study investigates recent literature on the implementation methods of human–robot collaborative assembly scenarios. Using a combination of search strings, the researchers performed a systematic review, sourcing 451 publications from various databases (Science Direct (253), IEEE Xplore (49), Emerald (32), PubMed (21) and SpringerLink (96)). A coding assignment in Eppi-Reviewer helped screen the literature based on ‘exclude’ and ‘include’ criteria. The final number of full-text publications considered in this literature review is 118 peer-reviewed research articles published up until September 2022. The findings anticipate that research publications in the field of human–robot collaborative assembly will continue to grow. Understanding and modeling human interaction and behavior in robot co-assembly is crucial to the development of future sustainable smart factories. Machine vision and digital-twin modeling are beginning to emerge as promising interfaces for evaluating task-distribution strategies that mitigate actual human ergonomic and safety risks in the design of collaborative assembly solutions. Full article
(This article belongs to the Special Issue Human Factors in Human–Robot Interaction)
Show Figures

Figure 1: Research framework.
Figure 2: Flow diagram of the selection process.
Figure 3: Reviewer interface.
Figure 4: Distribution of enabling concepts identified in the literature for an effective implementation of collaborative assembly in manufacturing.
Figure 5: Task allocation between the human and the robot.
Figure 6: Research trends in fatigue management in collaborative assembly (up to September 2022).
Figure 7: Human–robot control-states loop. The interaction controller generates the optimized task allocation to the robot as human movements change.
Figure 8: Sensor communication in HRC.
Figure 9: Optimization planning of task allocation for human–robot collaboration: multi-criteria optimization minimizing human fatigue, energy consumption and path while maximizing efficiency.
Figure 10: Components of a virtual HRC assembly design. CAD models of humans, collaborative robots and tools are imported into an immersive environment where interaction is enabled through sensors. Such environments allow multiple interaction scenarios to be tested in relative safety prior to physical set-ups.
Figure 11: (a) Co-assembly areas of the human (orange) and robot (blue); (b) biomechanical modeling of upper-body motion patterns. The motion patterns of the upper-body limbs are cataloged, and the visual controller can evaluate deviations from the prescribed path(s) in both space and time.
Figure 12: Virtual collaborative assembly design, developed at the X-Reality Lab, RMCERI, Department of Industrial Engineering, TUT. The protocols are designed to let the co-operator view the ergonomic characteristics of the assembly task: the contact points in space between the human, the robot and the product, the load variance at various execution times and the aging energy level.
Figure 13: Research classification.
Figure 14: Cluster (polynomial) analysis of key research areas of HRC in assembly.
Figure 15: Computer vision for the tracking of joint motion patterns in collaborative assembly design.
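Figure 9 frames task allocation as a multi-criteria optimization (minimize human fatigue, energy consumption and path; maximize efficiency). One common reduction is a weighted-sum cost over candidate assignments; the sketch below is a hypothetical illustration of that idea, not the reviewed papers' implementations, and every weight and score is a placeholder.

```python
# Hypothetical weighted-sum cost for human-robot task allocation,
# illustrating the multi-criteria objective sketched in Figure 9.

def allocation_cost(scores, weights=(0.4, 0.3, 0.2, 0.1)):
    """scores: per-criterion values normalized to [0, 1] (placeholders)."""
    w_fat, w_en, w_path, w_eff = weights
    return (w_fat * scores["human_fatigue"]
            + w_en * scores["energy"]
            + w_path * scores["path"]
            - w_eff * scores["efficiency"])  # minus sign: efficiency is maximized

candidates = [
    {"human_fatigue": 0.7, "energy": 0.4, "path": 0.5, "efficiency": 0.6},
    {"human_fatigue": 0.3, "energy": 0.6, "path": 0.4, "efficiency": 0.8},
]
print("Chosen allocation:", min(candidates, key=allocation_cost))
```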
18 pages, 2093 KiB  
Article
Inverse Kinematics of a Class of 6R Collaborative Robots with Non-Spherical Wrist
by Luca Carbonari, Matteo-Claudio Palpacelli and Massimo Callegari
Robotics 2023, 12(2), 36; https://doi.org/10.3390/robotics12020036 - 3 Mar 2023
Cited by 3 | Viewed by 5348
Abstract
The spread of cobots in common industrial practice has led constructors to prioritize the collaborative features needed to prevent injuries to operators over the realization of simple kinematic structures whose joints-to-workspace mapping is well known. An example is the replacement in serial robots of spherical wrists with safer solutions, where the danger of crushing and shearing is intrinsically avoided. Despite this tendency, the kinematic map between actuated joints and the Cartesian workspace remains of paramount importance for robot analysis and programming, deserving the attention of the research community. This paper proposes a closed-form solution for the inverse kinematics of a class of six-degree-of-freedom 6R robotic arms with non-spherical wrists. The solution is worked out as a single polynomial, of minimum degree, in one of the positioning parameters chosen to describe the robot posture. The roots of this polynomial are then back-substituted to determine all the remaining unknowns. A numerical example is finally shown to verify the validity of the proposed implementation for a commercial collaborative robot. Full article
(This article belongs to the Special Issue Kinematics and Robot Design V, KaRD2022)
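The abstract's strategy — reduce the inverse kinematics to a single minimum-degree univariate polynomial in one posture parameter, then back-substitute each root — follows a pattern that can be sketched numerically. Everything below is a placeholder stand-in: the coefficients and the `back_substitute` helper are hypothetical, since the paper derives the actual expressions in closed form for the CRX-10iA/L class of arms.

```python
import numpy as np

def back_substitute(t):
    """Hypothetical stand-in for the paper's closed-form expressions that
    recover the remaining joint angles from one polynomial root."""
    return np.full(6, np.arctan(t))

# Placeholder coefficients of the minimum-degree polynomial in the chosen
# posture parameter t (the real ones come from symbolic elimination).
coeffs = [1.0, -2.3, 0.7, 1.1, -0.4]
roots = np.roots(coeffs)
real_roots = roots[np.abs(roots.imag) < 1e-9].real

# Each real root yields one posture; the paper reports up to 16
# configurations for this robot class (Figure 3).
postures = [back_substitute(t) for t in real_roots]
print(f"{len(postures)} candidate posture(s) recovered")
```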
Show Figures

Figure 1: (a) Collaborative robot FANUC CRX-10iA/L and (b) its kinematic architecture.
Figure 2: (a) Frames attached to the robot bodies and (b) details of their lengths.
Figure 3: Representation of the 16 configurations resulting from the inverse kinematics problem solution.
11 pages, 2347 KiB  
Article
A Simple Controller for Omnidirectional Trotting of Quadrupedal Robots: Command Following and Waypoint Tracking
by Pranav A. Bhounsule and Chun-Ming Yang
Robotics 2023, 12(2), 35; https://doi.org/10.3390/robotics12020035 - 28 Feb 2023
Cited by 4 | Viewed by 2840
Abstract
For autonomous legged robots to be deployed in practical scenarios, they need to perform perception, motion planning, and locomotion control. Since robots have limited computing capabilities, it is important to realize locomotion control with simple controllers that require only modest calculations. The goal of this paper is to create computationally simple controllers for locomotion control that can free up computational resources for more demanding tasks, such as perception and motion planning. The controller consists of a leg scheduler for sequencing a trot gait with a fixed step time; a reference trajectory generator for the feet in Cartesian space, which is then mapped to the joint space using an analytical inverse; and a joint controller combining feedforward torques based on static equilibrium with feedback torques. The resulting controller enables velocity command following in the forward, sideways, and turning directions. With these three command-following modes, a waypoint tracking controller is developed that can track a curve in global coordinates using feedback linearization. The command-following and waypoint tracking controllers are demonstrated in simulation and on hardware. Full article
(This article belongs to the Section Intelligent Robots and Mechatronics)
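The joint controller described above adds a feedforward torque from static equilibrium to a joint-space feedback term. Below is a minimal sketch of that torque law, under the usual assumption that the feedforward maps the desired ground reaction force through the leg Jacobian; the gains, dimensions and numbers are illustrative, not the paper's.

```python
import numpy as np

def joint_torque(q, qd, q_ref, qd_ref, J, F_grf,
                 Kp=np.diag([60.0, 60.0, 60.0]),
                 Kd=np.diag([2.0, 2.0, 2.0])):
    """tau = J^T F (static-equilibrium feedforward) + PD feedback.
    All gains and shapes are placeholders for a 3-joint leg."""
    tau_ff = J.T @ F_grf
    tau_fb = Kp @ (q_ref - q) + Kd @ (qd_ref - qd)
    return tau_ff + tau_fb

# Illustrative call: hold a nominal vertical stance force of 30 N.
tau = joint_torque(q=np.zeros(3), qd=np.zeros(3),
                   q_ref=np.array([0.1, -0.2, 0.3]), qd_ref=np.zeros(3),
                   J=np.eye(3), F_grf=np.array([0.0, 0.0, 30.0]))
print(tau)
```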
Show Figures

Figure 1: Flowchart for the control of a single leg. The highlighted variables (in yellow) are the control variables: the torso speed in the fore–aft direction ($v_x$), the torso speed in the lateral direction ($v_y$), the angular velocity of the torso ($\omega$), the leg height ($\ell_z$), the ground clearance $h_{cl}$, and the step time $t_s$. The highlighted blocks (in blue) are computed once per step ($1/t_s$), while the others run at 500 Hz. IMU stands for inertial measurement unit; it provides the torso orientation as a quaternion (quat). Each of these blocks is elaborated in the low-level control section.
Figure 2: Kinematics of a single leg.
Figure 3: Point-mass model to compute joint torques. The ground reaction forces are denoted by $F$ and the location of the foot with respect to the center of mass of the torso is denoted by $r$.
Figure 4: Command following in simulation: (a,b) simultaneously commanded to follow $v_x$ and $\omega$; (c,d) simultaneously commanded to follow $v_x$ and $v_y$.
Figure 5: Waypoint tracking of the robot: (a,b) show tracking for a changing yaw reference in simulations and experiments, respectively, and (c,d) show tracking for a constant yaw reference in simulations and experiments, respectively. Please see video [36].
Figure 6: A video comparing the simulation (inset) with the hardware results (see video [36]).
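Figure 5's waypoint tracking uses feedback linearization in global coordinates; for an omnidirectional trot this typically means rotating a proportional world-frame position error into the body frame and issuing it through the (v_x, v_y, ω) command-following modes. The sketch below is our assumed reading of that scheme, with illustrative gains rather than the paper's values.

```python
import numpy as np

def track_waypoint(p, yaw, p_ref, k=1.0):
    """Map a world-frame position error to body-frame velocity commands
    (v_x, v_y) plus a yaw-rate command; k is an illustrative gain."""
    R = np.array([[np.cos(yaw), np.sin(yaw)],
                  [-np.sin(yaw), np.cos(yaw)]])      # world -> body rotation
    vx, vy = k * (R @ (p_ref - p))
    yaw_ref = np.arctan2(p_ref[1] - p[1], p_ref[0] - p[0])
    # Wrap the heading error to (-pi, pi] before applying the gain.
    omega = k * np.arctan2(np.sin(yaw_ref - yaw), np.cos(yaw_ref - yaw))
    return vx, vy, omega

print(track_waypoint(np.array([0.0, 0.0]), 0.2, np.array([1.0, 0.5])))
```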
19 pages, 8090 KiB  
Article
Stastaball: Design and Control of a Statically Stable Ball Robot
by Luca Fornarelli, Jack Young, Thomas McKenna, Ebenezer Koya and John Hedley
Robotics 2023, 12(2), 34; https://doi.org/10.3390/robotics12020034 - 28 Feb 2023
Cited by 2 | Viewed by 3455
Abstract
Ballbots are omnidirectional robots in which a robot chassis is built and balanced on top of a ball, providing a highly manoeuvrable platform on a planar surface. However, such robots are stabilized dynamically by a suitable controller, so power must be maintained continuously. In this paper, a novel approach to ballbot design is presented in which unpowered static stability is achieved mechanically by a suitable choice of position for the centre of mass of the robot. Simulations of the design and a built prototype demonstrate the feasibility of this approach, with static stability and measured performance of three-degree-of-freedom movement, linear speeds of 0.05 m/s, a rotational speed of 1 rad/s and the ability to traverse inclines of up to 3°. Limitations in performance were predominantly due to the compressibility of the ball used and the power of the motors. Areas for future development to address these issues are suggested. Full article
(This article belongs to the Special Issue Recent Trends and Advances in Mechanism Design and Robotics)
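The static stability claimed here is the pendulum effect of Figure 1: with the centre of mass a distance d below the ball's centre of rotation, a tilt θ produces a gravitational restoring moment. Below is a small sketch of that relation, reusing the simulation values quoted later for Figure 2 (frame mass 1 kg, offset 0.08 m); the function itself is our illustration, not code from the paper.

```python
import numpy as np

def restoring_moment(theta, m=1.0, d=0.08, g=9.81):
    """Gravitational restoring moment (N*m) on a frame of mass m (kg) whose
    centre of mass sits d (m) below the ball's centre of rotation; the
    negative sign means the moment opposes the tilt theta (rad)."""
    return -m * g * d * np.sin(theta)

# A 10 degree induced tilt is pulled back toward upright:
print(f"{restoring_moment(np.deg2rad(10.0)):.3f} N*m")
```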
Show Figures

Figure 1: The operational principle of a statically stable ball robot. As the centre of mass of the robot (CM) is lower than the centre of rotation (CR) of the ball, any induced tilt of the robot results in a restoring force that corrects the tilt.
Figure 2: The Stastaball robot as simulated in Webots. A solid sphere of mass 10 kg was used as the central ball. The frame mass was 1 kg, offset downwards from the ball centre by 0.08 m.
Figure 3: Response of the stationary robot when a pitch and roll disturbance is introduced in the first 1.3 s.
Figure 4: Rotation of the robot frame as the robot translates in the x direction; the dotted horizontal lines show the maximum pitch angle for the robot. At 0.05 m/s (A), oscillations are seen in the pitch angle. At 0.1 m/s (B), the excessive pitching causes the frame to collide with the ground at 5.0 s, causing movement in the roll and yaw.
Figure 5: Recovery of the robot from a disturbance at 2 s using a PID controller to correct the frame orientation. The PID controller was activated at 2.7 s.
Figure 6: Response of the robot to an initial disturbance at 2 s followed by a command at 3.1 s to translate towards a heading of −135° with PID implemented.
Figure 7: Response of the robot to a command to move forward with a heading of −135° with PID implemented. The initial heading is 0°. Sudden changes in yaw cause disturbances in pitch and roll, as seen at 31.2 s.
Figure 8: CAD drawing of the robot viewed from (A) above and (B) below, where the drive wheels are clearly visible. Six omnidirectional support wheels on the halo of the robot help centralize the ball within the frame.
Figure 9: Schematic of the communication structure of the robot.
Figure 10: Image of the constructed Stastaball prototype showing a primary Arduino (A); a secondary Arduino (B); a driven wheel (C), partially visible behind the halo; passive support wheels on the frame (D); the halo (E); one of the power banks (F); ultrasonic sensors (G); and the camera (H). The upper inset shows a zoomed view of the primary Arduino with the Bluetooth module (I) and the IMU module (not visible in this image). The lower inset shows a zoomed view of one of the secondary Arduinos with a voltage regulator (J) and motor driver (K) mounted on an Arduino shield.
Figure 11: Static stability test for the robot. Induced displacements in the pitch and then roll angles demonstrate damped oscillations back to the static position.
Figure 12: Response of the robot in the pitch, roll and yaw axes when (A) being driven at 0.05 m/s in the x direction and (B) programmed to rotate on the spot at varying yaw rates.
Figure A1: (A) Side view of the robot showing the placement of one of the drive wheels. (B) Top view of the robot showing the placement of the three drive wheels. The force imparted on the ball by the wheels, shown by the arrows, results in the frame rotating in the opposite direction (clockwise in the case shown).
Figure A2: The moment produced by the driving wheels tilts the frame, and this is balanced by the moment produced by the gravitational force on the offset centre of mass.
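Figure A2's balance can be written out explicitly: at steady drive, the wheel moment τ_w is offset by the gravitational moment of the displaced centre of mass, fixing a steady tilt angle. A worked form under the same pendulum model as the sketch above (our reading of the caption, not an equation quoted from the paper):

```latex
\tau_w = m\,g\,d\,\sin\theta_{\mathrm{eq}}
\quad\Longrightarrow\quad
\theta_{\mathrm{eq}} = \arcsin\!\left(\frac{\tau_w}{m\,g\,d}\right),
\qquad \text{valid while } \tau_w \le m\,g\,d .
```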
17 pages, 14981 KiB  
Article
Virtual Sensor-Based Geometry Prediction of Complex Sheet Metal Parts Formed by Robotic Rollforming
by Tina Abdolmohammadi, Valentin Richter-Trummer, Antje Ahrens, Karsten Richter, Alaa Alibrahim and Markus Werner
Robotics 2023, 12(2), 33; https://doi.org/10.3390/robotics12020033 - 22 Feb 2023
Cited by 4 | Viewed by 2400
Abstract
Sheet metal parts can often replace milled components, strongly improving the buy-to-fly ratio in the aeronautical sector. However, forming complex sheet metal parts traditionally requires expensive tooling, which is usually prohibitive at low manufacturing rates. To achieve precise parts, non-productive and cost-intensive geometry-straightening processes are often additionally required after forming. Rollforming is a suitable technology for producing profiles at high rates. At low manufacturing rates, robotic rollforming can be an interesting option, significantly reducing investment at the cost of longer manufacturing times while retaining high process flexibility. Forming is performed incrementally by a single roller set moved by the robot along predefined bending curves. The present work contributes to the overall solution by developing an intelligent algorithm that calculates the part geometry after a robotic rollforming process from the process reaction forces. This information is required for in-process geometric distortion correction. Reaction forces and torques are acquired during the process, and the geometry is calculated by artificial intelligence (AI) applied to this information. The present paper describes the AI development for this virtual geometry-sensing system. Full article
(This article belongs to the Section Industrial Robots and Automation)
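The virtual sensor maps in-process reaction forces and torques to the formed geometry; the figures below report random forest and support vector regressors plus hyperparameter tuning. Here is a minimal, assumption-heavy sketch of such a pipeline with scikit-learn — the feature layout, array shapes and synthetic data are invented for illustration and are not the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: one row of preprocessed force/torque features per
# cutting plane along the bending edge; target is the measured bend angle.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                    # e.g. windowed F/T statistics
y = 90.0 + 2.0 * X[:, 0] + rng.normal(scale=0.2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"Held-out R^2: {model.score(X_te, y_te):.3f}")
```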
Show Figures

Figure 1: iRoRoFo process sequence.
Figure 2: Cell setup used for the experiments.
Figure 3: Scheme of the robotic end-effector.
Figure 4: Scheme of the defined specimen.
Figure 5: Scheme of the forming path for all increments.
Figure 6: Extracted geometry points for one cutting plane, with regression lines, at x = 434 mm.
Figure 7: Force measurement data preprocessing scheme.
Figure 8: Measured angles along the bending edge.
Figure 9: Repeatability of measured forces in the X direction along the bending edge.
Figure 10: Structure of input and output data.
Figure 11: Geometry prediction results with random forest regression.
Figure 12: Geometry prediction results with support vector regression.
Figure 13: Results of hyperparameter tuning.
Figure 14: Slice plot of hyperparameters.
Figure 15: Parallel coordinate plot of hyperparameters.
Figure 16: Geometry prediction results with the neural network.
Figure 17: Zoom into the geometry prediction results shown in Figure 16.
Previous Issue
Next Issue
Back to Top