Search Results (2,468)

Search Parameters:
Keywords = mobile robots

16 pages, 5643 KiB  
Article
Revolutionizing Palm Dates Harvesting with Multirotor Flying Vehicles
by Hanafy M. Omar and Saad M. S. Mukras
Appl. Sci. 2024, 14(22), 10529; https://doi.org/10.3390/app142210529 - 15 Nov 2024
Abstract
This study addresses the challenges of traditional date palm harvesting, which is often labor-intensive and hazardous, by introducing an innovative solution utilizing multirotor flying vehicles (MRFVs). Unlike conventional methods such as hydraulic lifts and ground-based robotic manipulators, the proposed system integrates a quadrotor equipped with a winch and a suspended robotic arm with a precision saw. Controlled remotely via a mobile application, the quadrotor navigates to targeted branches on the date palm tree, where the robotic arm, guided by live video feedback from integrated cameras, accurately severs the branches. Extensive testing in a controlled environment demonstrates the system’s potential to significantly improve harvesting efficiency, safety, and cost-effectiveness. This approach offers a promising alternative to traditional harvesting methods, providing a scalable solution for date palm cultivation, particularly in regions with large-scale plantations. This work marks a significant advancement in the field of agricultural automation, offering a safer, more efficient method for harvesting date palms and contributing to the growing body of knowledge in automated farming technologies.
Figures:
Figure 1: Harvesting date palms by climbing trees.
Figure 2: Hydraulic lift for palm tree harvesting.
Figure 3: Robotic arm for date harvesting [16].
Figure 4: Developed system.
Figure 5: Robotic arm.
Figure 6: The designed winch.
Figure 7: Top view of the designed quadrotor flying vehicle.
Figure 8: Connections of the RPI fixed on the robotic arm.
Figure 9: Connections of the RPI fixed on the quadrotor.
Figure 10: Screenshot of the application main screen during the operation.
Figure 11: Testing the system in the lab using the testbed.
Figure 12: Quadrotor attitude angles.
Figure 13: Quadrotor speed in the longitudinal direction.
Figure 14: Quadrotor speed in the lateral direction.
Figure 15: Quadrotor speed in the vertical direction.
15 pages, 8780 KiB  
Article
A Lightweight, Centralized, Collaborative, Truncated Signed Distance Function-Based Dense Simultaneous Localization and Mapping System for Multiple Mobile Vehicles
by Haohua Que, Haojia Gao, Weihao Shan, Xinghua Yang and Rong Zhao
Sensors 2024, 24(22), 7297; https://doi.org/10.3390/s24227297 - 15 Nov 2024
Abstract
Simultaneous Localization And Mapping (SLAM) algorithms play a critical role in autonomous exploration tasks that require mobile robots to autonomously explore and gather information in unknown or hazardous environments where human access may be difficult or dangerous. However, because of their resource-constrained nature, mobile robots are hindered from performing long-term and large-scale tasks. In this paper, we propose an efficient multi-robot dense SLAM system that utilizes a centralized structure to alleviate the computational and memory burdens on the agents (i.e., mobile robots). To enable real-time dense mapping on the agent, we design a lightweight and accurate dense mapping method. On the server, to find correct loop closure inliers, we design a novel loop closure detection method based on both visual and dense geometric information. To correct the drifted poses of the agents, we integrate the dense geometric information along with the trajectory information into a multi-robot pose graph optimization problem. Experiments based on pre-recorded datasets have demonstrated our system’s efficiency and accuracy. Real-world online deployment of our system on mobile vehicles achieved a dense mapping update rate of ∼14 frames per second (fps), an onboard mapping RAM usage of ∼3.4%, and a bandwidth usage of ∼302 KB/s with a Jetson Xavier NX.
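The lightweight dense mapping described above is built on a truncated signed distance function (TSDF). As a rough illustration of the general idea (not the authors' implementation), a TSDF grid can be fused incrementally with the classic weighted running average; the flat array layout, unit observation weight, and weight cap below are illustrative choices:

```python
import numpy as np

def tsdf_update(tsdf, weights, depths, truncation=0.1, max_weight=100.0):
    """Fuse a batch of signed-distance observations into a TSDF grid using
    the standard weighted running average.

    tsdf, weights : flat arrays of current voxel values and fusion weights
    depths        : signed distances of the same voxels to the new surface
    """
    # Truncate distances to the band [-truncation, truncation]
    d = np.clip(depths, -truncation, truncation)
    # Only voxels in front of, or just behind, the surface are updated
    mask = depths > -truncation
    w_new = 1.0
    tsdf[mask] = (tsdf[mask] * weights[mask] + d[mask] * w_new) / (weights[mask] + w_new)
    weights[mask] = np.minimum(weights[mask] + w_new, max_weight)
    return tsdf, weights
```

Each observation nudges the stored signed distance toward the new measurement, which keeps the per-frame update cheap enough for a resource-constrained agent while the server handles the heavy global optimization.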
Figures:
Figure 1: Overview of the SLAM system architecture. Each robotic agent (e.g., a mobile robot) runs real-time visual inertial odometry, maintaining a local TSDF map of limited size and a communication module to send data to the server. The server performs non-time-critical, memory-heavy, and computationally expensive tasks: map management, place recognition, pose graph optimization, and map fusion.
Figure 2: Structure of the global pose graph: the red circles indicate submap nodes (poses), the black lines indicate odometry constraints, the green lines indicate registration constraints, and the red lines indicate loop closure constraints.
Figure 3: Comparisons of the TSDF mapping performance in terms of TSDF update time, TSDF error, and ESDF error utilizing the Cow&Lady Dataset [1] and the EuRoC Dataset [2]. We compare each method under different voxel sizes.
Figure 4: Collaborative dense mapping results of two agents utilizing the EuRoC Dataset [2]. (a) Dense mapping result of agent 1 in MH_01 sequence, (b) dense mapping result of agent 2 in MH_03 sequence, (c) merged global map of MH_01 and MH_03.
Figure 5: Real-world centralized collaborative multi-robot dense SLAM system. (a) The agent, which is a resource-constrained mobile robot. (b) The whole system with two agents and one server.
Figure 6: Online collaborative SLAM with two agents (mobile robots) utilizing a centralized architecture in a large office building. The first two pictures depict the SLAM results of a single agent. The right picture illustrates the collaborative SLAM result; the yellow line and the green line represent the trajectories of Agent 1 and Agent 2.
Figure 7: Online collaborative SLAM with two agents (mobile robots) in the same indoor room with obstacles. The yellow line and the green line represent the trajectories of Agent 1 and Agent 2.
21 pages, 9035 KiB  
Article
Design and Implementation of an AI-Based Robotic Arm for Strawberry Harvesting
by Chung-Liang Chang and Cheng-Chieh Huang
Agriculture 2024, 14(11), 2057; https://doi.org/10.3390/agriculture14112057 - 15 Nov 2024
Abstract
This study presents the design and implementation of a wire-driven, multi-joint robotic arm equipped with a cutting and gripping mechanism for harvesting delicate strawberries, with the goal of reducing labor and costs. The arm is mounted on a lifting mechanism and linked to a laterally movable module, which is affixed to the tube cultivation shelf. The trained deep learning model can instantly detect strawberries, identify optimal picking points, and estimate the contour area of fruit while the mobile platform is in motion. A two-stage fuzzy logic control (2s-FLC) method is employed to adjust the length of the arm and bending angle, enabling the end of the arm to approach the fruit picking position. The experimental results indicate a 90% accuracy in fruit detection, an 82% success rate in harvesting, and an average picking time of 6.5 s per strawberry, reduced to 5 s without arm recovery time. The performance of the proposed system in harvesting strawberries of different sizes under varying lighting conditions is also statistically analyzed and evaluated in this paper.
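The two-stage fuzzy logic controller (2s-FLC) maps visual measurements to PWM commands for the arm. The sketch below shows only the generic machinery one stage of such a controller needs (triangular membership functions, a small rule base, centroid defuzzification); the linguistic terms, ranges, and output singletons are invented for illustration and are not the paper's rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def flc_step(error):
    """One fuzzy control step: fuzzify the error, fire three rules, and
    defuzzify as the centroid of weighted singleton outputs."""
    mu = {
        "neg": tri(error, -2.0, -1.0, 0.0),
        "zero": tri(error, -1.0, 0.0, 1.0),
        "pos": tri(error, 0.0, 1.0, 2.0),
    }
    # Rule consequents as PWM singletons (illustrative values only)
    out = {"neg": -100.0, "zero": 0.0, "pos": 100.0}
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values())
    return num / den if den > 0 else 0.0
```

In a two-stage arrangement, the first stage's output (e.g., a vertical PWM) would be fed, together with a second measurement, into a second controller of the same shape.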
Figures:
Figure 1: Schematic of joint arm swing (the black dotted line indicates the trajectory of the arm swing).
Figure 2: Structure of the multi-jointed robotic arm (center); base of the arm (top left) and end joint (bottom left); internal hoses and thin wires within the arm (right).
Figure 3: Design of clamp and cutting tool. (a) The structure of clamping and cutting tools; (b) clamp in the open state; (c) clamp in the closed state; (d) prototype of the two sets of clamps; (e) mounting of the clamp on the joint arm (with the upper clamp in the open state and the lower clamp in the closed state); (f) the clamp in action for picking strawberries (the nozzle is installed inside the tube, bottom left).
Figure 4: Clamp cutting part with two blades and gripping part with two foam pads.
Figure 5: Hydroponic fruit picking platform: ➀ hydroponic PVC pipe and aluminum extrusion track; ➁ pulley module; ➂ module for raising and lowering the arm; ➃ arm with two camera units.
Figure 6: Robotic arm and lifting module: (a) prototype of lifting module and mechanism; (b–d) show the actions of extending the robotic arm.
Figure 7: Process of creating the object model.
Figure 8: Side view of fruit models in three different sizes, labeled Size 1 (a), Size 2 (b), and Size 3 (c).
Figure 9: Coordinate configuration of arm and strawberry.
Figure 10: The block diagram of the 2s-FLC system.
Figure 11: Input and output variable fuzzification for FLC 1 and FLC 2. (a) v′ for input of FLC 1; (b) a for input of FLC 1 and FLC 2; (c) PWM_Z for output of FLC 1; (d) PWM_Z for input of FLC 2; (e) PWM_Y for output of FLC 2.
Figure 12: Example of fuzzy inference and defuzzification; fuzzy inference results when v′ = 600 and a = 10^5 (pixel) (FLC 1).
Figure 13: Fuzzy inference surfaces of FLC 1 (left) and FLC 2 (right).
Figure 14: Strawberry identification results.
Figure 15: Bending test of the jointed arm. (a) Simulated joint arm bending using the Simulink tool, (b) arm bending without the plastic tube inserted, and (c) arm bending with the plastic tube inserted.
Figure 16: Swing trajectory of the joint arm. (a) Bending trajectories of PVC plastic pipes with insertion (blue line) and without insertion (black dashed line); (b) relationship between joint arm lengths and swing trajectories (each color represents the swing trajectory for a different joint arm length).
Figure 17: Average time per fruit for single fruit picking operation.
Figure 18: Snapshot of the experimental site (strawberry models of different sizes hung on one side).
Figure 19: Performance comparison of the detection model at various times.
Figure 20: Strawberry picking experiment site (with strawberry models of different sizes hanging on both sides).
Figure 21: Snapshots of the joint arm grasping a strawberry. (a) The joint arm is lowered and aligned with the target; (b) the joint arm rises; (c) the joint arm bends; (d) the gripper cuts the stem; (e) the gripper clamps the stem; (f) the arm is lowered; (g) the gripper releases the stem; (h) the mobile platform moves to the next target. Images (i–l) respectively illustrate the lifting and bending of the arm toward the strawberry stem (i,j), the gripping action (k), and finally the arm in a lowered position (l).
16 pages, 1253 KiB  
Article
State Estimation for Quadruped Robots on Non-Stationary Terrain via Invariant Extended Kalman Filter and Disturbance Observer
by Mingfei Wan, Daoguang Liu, Jun Wu, Li Li, Zhangjun Peng and Zhigui Liu
Sensors 2024, 24(22), 7290; https://doi.org/10.3390/s24227290 - 14 Nov 2024
Abstract
Quadruped robots possess significant mobility in complex and uneven terrains due to their outstanding stability and flexibility, making them highly suitable for rescue missions, environmental monitoring, and smart agriculture. With the increasing use of quadruped robots in more demanding scenarios, ensuring accurate and stable state estimation in complex environments has become particularly important. Existing state estimation algorithms relying on multi-sensor fusion, such as those using IMU, LiDAR, and visual data, often face challenges on non-stationary terrains due to issues like foot-end slippage or unstable contact, leading to significant state drift. To tackle this problem, this paper introduces a state estimation algorithm that integrates an invariant extended Kalman filter (InEKF) with a disturbance observer, aiming to estimate the motion state of quadruped robots on non-stationary terrains. Firstly, foot-end slippage is modeled as a deviation in body velocity and explicitly included in the state equations, allowing for a more precise representation of how slippage affects the state. Secondly, the state update process integrates both foot-end velocity and position observations to improve the overall accuracy and comprehensiveness of the estimation. Lastly, a foot-end contact probability model, coupled with an adaptive covariance adjustment strategy, is employed to dynamically modulate the influence of the observations. These enhancements significantly improve the filter’s robustness and the accuracy of state estimation in non-stationary terrain scenarios. Experiments conducted with the Jueying Mini quadruped robot on various non-stationary terrains show that the enhanced InEKF method offers notable advantages over traditional filters in compensating for foot-end slippage and adapting to different terrains.
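The core of coupling a contact probability model with adaptive covariance adjustment can be shown with a toy Kalman update: when the foot's contact probability drops, the measurement noise covariance is inflated so the slipping foot's observation is down-weighted. This is a minimal scalar illustration of the principle, not the paper's invariant EKF:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update for state x with covariance P."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def adaptive_R(R_base, contact_prob, eps=1e-3):
    """Inflate foot-end measurement noise when contact is uncertain: a low
    contact probability yields a large covariance, so the observation is
    down-weighted in the update."""
    return R_base / max(contact_prob, eps)
```

With firm contact (probability near 1) the foot-end observation pulls the state strongly; with a likely slip (probability near 0) the same observation barely moves it.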
(This article belongs to the Section Sensors and Robotics)
25 pages, 2072 KiB  
Article
Full Forward Kinematics of Lower-Mobility Planar Parallel Continuum Robots
by Oscar Altuzarra, Mónica Urizar, Kerman Bilbao and Alfonso Hernández
Mathematics 2024, 12(22), 3562; https://doi.org/10.3390/math12223562 - 14 Nov 2024
Abstract
In rigid lower-mobility parallel manipulators, the motion of the end-effector is partially constrained by a combination of passive kinematic pairs and rigid components. Translational mechanisms, such as the Delta manipulator, are the most common mechanisms of this type. When flexible elements are introduced, as in Parallel Continuum Manipulators, the constraint is no longer rigid, and new challenges arise in performing certain motions depending on the degree of compliance. Mobility analysis shifts from being a purely geometric issue to one that relies heavily on the force distribution within the mechanism. Simply converting classical lower-mobility rigid parallel mechanisms into Parallel Continuum Mechanisms may therefore yield unexpected outcomes. Using a planar parallel continuum Delta manipulator, this work presents two different approaches to solving the Forward Kinematics of planar continuum manipulators, and explores challenges and issues in assessing the resulting workspace for different design alternatives of this kind of flexible manipulator.
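For a single flexible rod, Forward Kinematics by direct integration amounts to integrating the planar elastica from the clamped base and shooting on the unknown base curvature until the tip boundary condition holds. The toy solver below (forward Euler, bisection, unit length and stiffness, a moment-free tip) is only a sketch of that scheme, not the manipulator-level solver of the paper:

```python
import math

def integrate_rod(theta0, kappa0, fx, fy, EI=1.0, L=1.0, n=200):
    """Integrate the planar elastica EI*theta'' = fx*sin(theta) - fy*cos(theta)
    from the clamped base by forward Euler. Returns the tip position (x, y),
    tip angle theta, and the residual tip moment EI*theta'(L)."""
    ds = L / n
    theta, kappa = theta0, kappa0
    x = y = 0.0
    for _ in range(n):
        x += math.cos(theta) * ds
        y += math.sin(theta) * ds
        theta += kappa * ds
        kappa += (fx * math.sin(theta) - fy * math.cos(theta)) / EI * ds
    return x, y, theta, EI * kappa

def shoot_cantilever(fx, fy, lo=-10.0, hi=10.0, iters=60):
    """Bisection on the unknown base curvature so the free tip carries no
    moment (theta'(L) = 0): a toy shooting solver for the rod's FK."""
    g = lambda k0: integrate_rod(0.0, k0, fx, fy)[3]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

A parallel continuum mechanism then couples several such boundary value problems through the end-effector's equilibrium, which is where the multiple assembly modes discussed in the paper arise.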
(This article belongs to the Special Issue Applied Mathematics to Mechanisms and Machines II)
Figures:
Figure 1: Planar translational rigid mechanism: workspaces for variable input stroke limits.
Figure 2: Planar quasi-translational flexible mechanism.
Figure 3: Flowchart for Mechanism’s Design Analysis.
Figure 4: Flexible clamped-hinged planar rod under force R at the end extreme.
Figure 5: Space of parameters k (modulus of the elliptic integral) and ψ (amplitude).
Figure 6: Flowchart for Forward Kinematics Direct Integration Solver.
Figure 7: Flowchart for Forward Kinematics Multiple Solution Solver.
Figure 8: Flowchart for workspace analysis with Multiple Solution Solver.
Figure 9: Flowchart for workspace analysis with Wave Propagation.
Figure 10: Four different design models according to the connection of the rods.
Figure 11: Symmetric overconstrained model.
Figure 12: Time–Tolerance–Total Solution number distribution.
Figure 13: Stable assembly modes for Design model 1.
Figure 14: Time–Tolerance–Useful Solution number distribution.
Figure 15: Design 1: Workspace for Assembly modes 1 and 2 (no load is applied).
Figure 16: Design 1 (AM 1): Workspace with maximum payload at point P.
Figure 17: Design 1 (AM 1): Workspace comparison for different loads.
Figure 18: Design 2: Workspace of the potentially useful assembly modes.
Figure 19: Design 1: Workspace of the reference and optimized design.
Figure 20: Workspace subjected to maximum payload in reference and Optimized design 1.
Figure 21: Workspace of the symmetric overconstrained design.
Figure 22: Symmetric design with maximum payload centered at end-effector.
Figure 23: Workspace load influence for symmetric overconstrained design.
65 pages, 2015 KiB  
Review
Parallel–Serial Robotic Manipulators: A Review of Architectures, Applications, and Methods of Design and Analysis
by Anton Antonov
Machines 2024, 12(11), 811; https://doi.org/10.3390/machines12110811 - 14 Nov 2024
Abstract
Parallel–serial (hybrid) manipulators represent robotic systems composed of kinematic chains with parallel and serial structures. These manipulators combine the benefits of both parallel and serial mechanisms, such as increased stiffness, high positioning accuracy, and a large workspace. This study discusses the existing architectures and applications of parallel–serial robots and the methods of their design and analysis. The paper reviews around 500 articles and presents over 150 architectures of manipulators used in machining, medicine, and pick-and-place tasks, humanoids and legged systems, haptic devices, simulators, and other applications, covering both lower mobility and kinematically redundant robots. After that, the paper considers how researchers have developed and analyzed these manipulators. In particular, it examines methods of type synthesis, mobility, kinematic, and dynamic analysis, workspace and singularity determination, performance evaluation, optimal design, control, and calibration. The review concludes with a discussion of current trends in the field of parallel–serial manipulators and potential directions for future studies.
21 pages, 1712 KiB  
Review
Autonomous Mobile Robots Inclusive Building Design for Facilities Management: Comprehensive PRISMA Review
by Zhi Qing Lim, Kwok Wei Shah and Meenakshi Gupta
Buildings 2024, 14(11), 3615; https://doi.org/10.3390/buildings14113615 - 14 Nov 2024
Abstract
The increasing adoption of advanced technologies and the growing demand for automation have driven the development of innovative solutions for smart Facilities Management (FM). The COVID-19 pandemic accelerated this trend, highlighting the need for greater automation in FM, including the use of Autonomous Mobile Robots (AMRs). Despite this momentum, AMR adoption remains in its early stages, with limited knowledge and research available on their practical applications in FM. This study seeks to explore the challenges that hinder the successful integration of AMRs in the FM industry. To achieve this, a systematic literature review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, encompassing three phases: identification, screening, and inclusion. The review covered 80 full-text articles published from 1994 to 2024, reflecting the growing interest in technological advancements for FM and the increased focus on AMR research. The study identified five key barriers specific to FM that affect AMR adoption: diverse operational contexts, poorly designed indoor environments, varying building occupants, multi-faceted FM functionalities, and differences in building exteriors. These findings provide a comprehensive understanding of the unique challenges faced by FM professionals, offering valuable insights for organizations and AMR developers to consider during the adoption process. The research contributes to the field by providing a foundation for FM practitioners, policymakers, and researchers to develop strategies for overcoming these barriers and advancing the adoption of AMR technologies in FM.
(This article belongs to the Section Construction Management, and Computers & Digitization)
Figures:
Figure 1: Adapted PRISMA flow diagram [41].
Figure 2: Bibliometric networks of AMRs research for co-occurrence of keywords using VOSviewer.
Figure 3: Bibliometric networks of AMRs research for co-authorship analysis by countries.
Figure 4: Distribution of the articles published over the years.
16 pages, 9423 KiB  
Article
EchoPT: A Pretrained Transformer Architecture That Predicts 2D In-Air Sonar Images for Mobile Robotics
by Jan Steckel, Wouter Jansen and Nico Huebel
Biomimetics 2024, 9(11), 695; https://doi.org/10.3390/biomimetics9110695 - 13 Nov 2024
Abstract
The predictive brain hypothesis suggests that perception can be interpreted as the process of minimizing the error between predicted perception tokens generated via an internal world model and actual sensory input tokens. When implementing working examples of this hypothesis in the context of in-air sonar, significant difficulties arise due to the sparse nature of the reflection model that governs ultrasonic sensing. Despite these challenges, creating consistent world models using sonar data is crucial for implementing predictive processing of ultrasound data in robotics. In an effort to enable robust robot behavior using ultrasound as the sole exteroceptive sensor modality, this paper introduces EchoPT (Echo-Predicting Pretrained Transformer), a pretrained transformer architecture designed to predict 2D sonar images from previous sensory data and robot ego-motion information. We detail the transformer architecture that drives EchoPT and compare the performance of our model to several state-of-the-art techniques. In addition to presenting and evaluating our EchoPT model, we demonstrate the effectiveness of this predictive perception approach in two robotic tasks.
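Of the baselines EchoPT is compared against, the naive predictor is simple enough to sketch: it translates the range-direction energy map by the commanded ego-motion, with zero fill where data leave the field of view. The bin sizes and sign conventions below are illustrative assumptions, not the paper's:

```python
import numpy as np

def shift2d(img, dy, dx):
    """Integer 2D shift with zero fill (no wrap-around)."""
    out = np.zeros_like(img)
    h, w = img.shape
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

def naive_predict(frame, v_lin, omega, dt, dr, dtheta):
    """Naive one-step sonar-image prediction: rows are range bins of dr
    metres, columns are direction bins of dtheta radians. Forward motion
    moves reflectors to smaller range bins; rotation slides them across
    bearing bins."""
    d_range = int(round(-v_lin * dt / dr))
    d_angle = int(round(omega * dt / dtheta))
    return shift2d(frame, d_range, d_angle)
```

Because this operator ignores how flow curves through a polar sensor field, it degrades for anything but small motions, which is the gap the acoustic flow model and the learned EchoPT predictor address.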
(This article belongs to the Special Issue Artificial Intelligence for Autonomous Robots: 3rd Edition)
Show Figures

Figure 1

Figure 1
<p>Overview of the experimental setup. Panel (<b>a</b>) shows the simulation environment in which a two-wheeled robot drives. A sketch of the robot is shown in panel (<b>c</b>). The robot uses an array-based imaging sonar sensor panel (<b>g</b>) capable of generating range-direction energy maps (called energyscapes), shown in panels (<b>d</b>–<b>f</b>). This sensor is modeled in the simulation environment based on accurate models of acoustic propagation and reflection. Panel (<b>b</b>) shows what is called the acoustic flow model. This model predicts how objects in the sensor scene move through the perceptive field based on a certain robot motion. The blue flow lines are shown for a linear robot motion. Panels (<b>d</b>–<b>f</b>) show the task that is being solved in this paper: how can novel sensor views be synthesized given a certain set of robot velocity commands <math display="inline"><semantics> <mfenced open="[" close="]"> <mtable> <mtr> <mtd> <msub> <mi>v</mi> <mrow> <mi>l</mi> <mi>i</mi> <mi>n</mi> </mrow> </msub> </mtd> <mtd> <msub> <mi>ω</mi> <mi>r</mi> </msub> </mtd> </mtr> </mtable> </mfenced> </semantics></math>? Each of these velocities has a time-step index, as shown in panels (<b>d</b>–<b>f</b>). Panel (<b>d</b>) shows the prediction based on the naive shifting of the image in the range and direction dimensions. Panel (<b>e</b>) shows the operation using the acoustic flow model of panel (<b>b</b>). Both of these operators can only use the last frame to perform the prediction. Panel (<b>f</b>) shows the EchoPT model, which takes in <span class="html-italic">n</span> previous frames and velocity commands and predicts the novel view using a transformer neural network.</p>
Full article ">Figure 2
<p>Overview of the network architecture of EchoPT. The EchoPT model has two inputs: the set of <span class="html-italic">n</span> previous input frames (set to three in this paper) and the <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>+</mo> <mn>1</mn> </mrow> </semantics></math> velocity commands (three previous and one for the prediction). The model has three main parallel branches: a transformer branch, a feed-forward convolutional branch for the sonar images, and an MLP (multi-layer perceptron) pipeline using the velocity commands as input. These three branches are depth-concatenated and passed through more feed-forward convolutional layers to obtain a single output image.</p>
Full article ">Figure 3
<p>Condensed version of <a href="#biomimetics-09-00695-f0A1" class="html-fig">Figure A1</a> in <a href="#app1-biomimetics-09-00695" class="html-app">Appendix A</a>. Panel (<b>a</b>) shows the target sonar image, and panel (<b>b</b>) shows the predicted image. Panel (<b>c</b>) shows the difference between the two images, and panels (<b>d</b>,<b>e</b>) show the 2D correlogram.</p>
Full article ">Figure 4
<p>Prediction results of a single frame using three prediction methods: the naive operation, which shifts the image in the range and direction dimensions; the acoustic flow approach, which uses the acoustic flow equations to transform the image; and finally, the EchoPT prediction.</p>
Full article ">Figure 5
<p>A first application of predictive processing in which a robot performs a trajectory in the environment from <a href="#biomimetics-09-00695-f001" class="html-fig">Figure 1</a>. In two periods (between 10 s and 16 s and between 30 s and 36 s), the robot encounters slip conditions (meaning the robot is not performing the motion that the robot expects to perform). In the first section, the robot is slipping on both wheels; in the second condition, only one wheel slips. The plots show the slip detector, which uses differences in the predicted and measured sensor data for different prediction horizons (one-shot, three-frame auto-regressive, and five-frame auto-regressive). Longer time horizons provide the clearest slip detection signal, with EchoPT being the only one that detects the second slip condition. Panel (<b>a</b>) shows the results for using the naive predictor, panel (<b>b</b>) for the acoustic flow predictor and panel (<b>c</b>) for the EchoPT predictor.</p>
Full article ">Figure 6
<p>A second application of predictive processing in which a robot is tasked with driving from the green rectangular spawn boxes to the waypoint indicated by the green circles, using a subsumption-based control stack described in [<a href="#B13-biomimetics-09-00695" class="html-bibr">13</a>]. Panel (<b>a</b>) shows the kernel density estimate of 50 runs with clean sensor data (signal-to-noise ratio, SNR = 5 dB). In panels (<b>b</b>,<b>c</b>), we added intermittent noise to the measured sensor data (shown in panel f, SNR = −80 dB). In panel (<b>b</b>), the original controller was used, showing the traversed paths’ deterioration. In panel (<b>c</b>), sensor data were predicted in an auto-regressive manner using EchoPT for the duration of the noise bursts and fed into the controller instead of the noisy data. Panel (<b>d</b>) shows the travel time for the robot in the three conditions, showing a large increase in travel time for the controller from panel (<b>b</b>). Panel (<b>e</b>) shows the deviation from the midline of the corridor, again showing a large deviation when no predictive processing is used. Panel (<b>f</b>) shows a small section of the evolution of the SNR over time.</p>
Full article ">Figure A1
<p>Detailed overview of some EchoPT predictions. Given a sequence of sonar images, T1 to T4 (panels (<b>a</b>–<b>d</b>)), with a robot performing a linear motion in a corridor, the EchoPT model predicts T4 (predicted) in panel (<b>e</b>). Panels (<b>f</b>–<b>i</b>) show the difference between T4 (predicted) and T1 to T4. These plots show that the model can capture the motion model of the sensor modality, as the errors between T4 and T4 (predicted) are near zero. The differences with the older images clearly show that the robot has learned to incorporate the sensor flow data. Panels (<b>j</b>–<b>n</b>) show the 2D correlograms between the prediction and the input data.</p>
Full article ">Figure A2
<p>Prediction of sonar images using an auto-regressive prediction model for the three prediction systems used in this paper (naive, acoustic flow, and EchoPT). As the robot motions are relatively small, the difference between the images is not clearly visible. In <a href="#biomimetics-09-00695-f0A3" class="html-fig">Figure A3</a>, we show the differences between the subsequent images, as this illustrates much more clearly the advantage of the EchoPT model over the other techniques.</p>
Full article ">Figure A3
<p>Prediction errors using an auto-regressive prediction model for the three prediction systems described. The deeper the prediction horizon, the larger the prediction errors become (very noticeable in frame 6). The EchoPT model maintains the smallest prediction errors, indicating the capability of the model to perform predictions over long time horizons. It should be noted that, after frame 3, no measured data are used in EchoPT; it relies purely on previous predictions to estimate the new data frame.</p>
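The auto-regressive rollout described in this caption (after the first steps, predictions are fed back in place of measured frames) can be sketched generically. The `predict` callable stands in for any one-step predictor (naive, acoustic flow, or EchoPT); the window size and the toy predictor in the test are assumptions:

```python
def autoregressive_rollout(predict, frames, commands, horizon):
    """Roll a one-step predictor forward for `horizon` steps: each new
    prediction replaces the oldest frame in the input window, so later
    steps rely purely on previous predictions, not measured data."""
    window = list(frames)            # the n most recent (initially measured) frames
    outputs = []
    for k in range(horizon):
        nxt = predict(window, commands[k])
        outputs.append(nxt)
        window = window[1:] + [nxt]  # drop oldest frame, append the prediction
    return outputs
```

Because errors compound through the window, the quality of the one-step predictor dominates how far the rollout stays usable, matching the comparison in the figure.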
Full article ">
17 pages, 753 KiB  
Article
Fixed-Time Event-Triggered Control of Nonholonomic Mobile Robots with Uncertain Dynamics and Preassigned Transient Performance
by Yong Wang, Yunfeng Ji, Wei Li and Xi Fang
Mathematics 2024, 12(22), 3544; https://doi.org/10.3390/math12223544 - 13 Nov 2024
Viewed by 241
Abstract
In this paper, a novel adaptive control scheme is proposed for the path-following problem of a nonholonomic mobile robot with uncertain dynamics based on barrier functions. To optimize communication resources, we integrate an event-triggered mechanism that avoids Zeno behavior and ensures that the [...] Read more.
In this paper, a novel adaptive control scheme is proposed for the path-following problem of a nonholonomic mobile robot with uncertain dynamics based on barrier functions. To optimize communication resources, we integrate an event-triggered mechanism that avoids Zeno behavior and ensures that the tracking error of the closed-loop system converges to a small neighborhood around zero within a fixed time, while consistently satisfying predefined transient performance requirements. Extensive simulation studies demonstrate the effectiveness of the proposed approach and validate the theoretical results. Full article
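The abstract's event-triggered mechanism is not reproduced here, but the general idea (recompute and transmit the control input only when the state has drifted enough since the last event) can be illustrated with a toy first-order plant. All parameters and the trigger rule below are illustrative assumptions, not the authors' barrier-function design:

```python
def simulate_event_triggered(x0, setpoint, k_gain, trigger_eps, dt, steps):
    """Toy first-order plant x' = u under a zero-order-hold proportional law.
    The input u is only recomputed (an 'event') when the state has drifted
    more than trigger_eps since the last event, saving communication."""
    x = x0
    x_event = x0                     # state recorded at the last event
    u = k_gain * (setpoint - x)      # initial event at t = 0
    events = 1
    for _ in range(steps):
        if abs(x - x_event) > trigger_eps:   # event-triggering condition
            u = k_gain * (setpoint - x)
            x_event = x
            events += 1
        x += u * dt                  # hold u constant between events
    return x, events
```

With x0 = 0, setpoint = 1, gain 2, a 0.05 trigger band, and 1000 steps of 10 ms, the state settles near the setpoint while triggering only a few dozen updates instead of one per step. A minimum inter-event time (which this sketch does not enforce) is what rules out Zeno behavior in the paper's scheme.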
Show Figures

Figure 1

Figure 1
<p>Two-wheeled nonholonomic mobile robot.</p>
Full article ">Figure 2
<p>Interpretation of path-tracking errors for the mobile robot.</p>
Full article ">Figure 3
<p>Control structure diagram of nonholonomic mobile robot.</p>
Full article ">Figure 4
<p>Robot position in (x, y) plane compared with the reference trajectory.</p>
Full article ">Figure 5
<p><math display="inline"><semantics> <msub> <mi>z</mi> <mi>e</mi> </msub> </semantics></math> and the prescribed performance bounds.</p>
Full article ">Figure 6
<p><math display="inline"><semantics> <msub> <mi>θ</mi> <mi>e</mi> </msub> </semantics></math> and the prescribed performance bounds.</p>
Full article ">Figure 7
<p>Interval of triggering events of <math display="inline"><semantics> <msub> <mi>u</mi> <mn>1</mn> </msub> </semantics></math>.</p>
Full article ">Figure 8
<p>Interval of triggering events of <math display="inline"><semantics> <msub> <mi>u</mi> <mn>2</mn> </msub> </semantics></math>.</p>
Full article ">Figure 9
<p>Comparison of event-triggered effect of different event-triggered parameters.</p>
Full article ">Figure 10
<p>Robot positions with different initial conditions.</p>
Full article ">
30 pages, 4171 KiB  
Review
Animal-Morphing Bio-Inspired Mechatronic Systems: Research Framework in Robot Design to Enhance Interplanetary Exploration on the Moon
by José Cornejo, Cecilia E. García Cena and José Baca
Biomimetics 2024, 9(11), 693; https://doi.org/10.3390/biomimetics9110693 - 13 Nov 2024
Viewed by 379
Abstract
Over the past 50 years, the space race has grown substantially due to the development of sophisticated mechatronic systems. One of the most important is bio-inspired mobile planetary robots, none of which has yet been reported to work physically on the [...] Read more.
Over the past 50 years, the space race has grown substantially due to the development of sophisticated mechatronic systems. One of the most important is bio-inspired mobile planetary robots, none of which has yet been reported to work physically on the Moon. Nonetheless, significant progress has been made in designing biomimetic systems based on animal morphology adapted to sand (granular material) and testing them in analog planetary environments, such as regolith simulants. Biomimetics and bio-inspired attributes contribute significantly to advancements across various industries by incorporating features from biological organisms, including autonomy, intelligence, adaptability, energy efficiency, self-repair, robustness, lightweight construction, and digging capabilities, all crucial for space systems. This study includes a scoping review, as of July 2024, focused on the design of animal-inspired robotic hardware for planetary exploration, supported by a bibliometric analysis of 482 papers indexed in Scopus. It also involves the classification and comparison of limbed and limbless animal-inspired robotic systems adapted for movement in soil and sand (locomotion methods such as grabbing-pushing, wriggling, undulating, and rolling), where the most frequently published robots are inspired by worms, moles, snakes, lizards, crabs, and spiders. As a result of this research, this work presents a pioneering methodology for designing bio-inspired robots, justifying the application of biological morphologies for subsurface or surface lunar exploration. By highlighting the technical features of actuators, sensors, and mechanisms, this approach demonstrates the potential for advancing space robotics by designing biomechatronic systems that mimic animal characteristics. Full article
Show Figures

Figure 1

Figure 1
<p>Adapted PRISMA flow diagram of the search process. CR: Crab, MO: Mole, WO: Worm, LZ: Lizard, SN: Snake, SP: Spider, SF-X: Surface exploration, SSF-X: Subsurface exploration. The numbers indicate the number of published articles.</p>
Full article ">Figure 2
<p>Novel design methodology proposed for space planetary bio-robots; it starts with the INPUT: Selection of animal species, and finishes with the OUTPUT: Prototype. Note: Analog Environment is defined as terrestrial locations that exhibit geological or environmental conditions analogous to celestial bodies, like the Moon or Mars. Source: Original contribution.</p>
Full article ">Figure 3
<p><span class="html-italic">Subsurface Exploration</span>: (<b>I</b>) Crab, Emerita Analoga (Standard Copyright Licence transferred to the authors) Adapted with permission from Bandersnatch(1808981506)/<a href="http://Shutterstock.com" target="_blank">Shutterstock.com</a> (accessed on 8 July 2024).—(<b>A</b>) Hardware components and full assembly, including the cuticle design, homing hall effect sensors, and retractable fabric leg design. Reproduced from [<a href="#B48-biomimetics-09-00693" class="html-bibr">48</a>]. CC BY 4.0. (<b>II</b>) Mole, Eremitalpa Granti (Standard Copyright Licence transferred to the authors) Adapted with permission from Anthony Bannister(MFFHY0)/<a href="http://Shutterstock.com" target="_blank">Shutterstock.com</a> (accessed on 9 July 2024).—(<b>B.1</b>) Design of the cable-driven burrowing force amplification mechanism. (<b>B.2</b>) System configuration. Reprinted from [<a href="#B50-biomimetics-09-00693" class="html-bibr">50</a>], Copyright (2023), with permission from IEEE. (<b>B.3</b>) Motion process during burrowing. (<b>B.4</b>) Prototype experiment and model angle measurement. Reprinted from [<a href="#B52-biomimetics-09-00693" class="html-bibr">52</a>], Copyright (2023), with permission from IEEE. Note: The left column shows the animal, while the right column represents the bio-inspired robot.</p>
Full article ">Figure 4
<p>Subsurface Exploration: (<b>I</b>) Worm, Eunice Aphroditois (Standard Copyright License transferred to the authors) Adapted with permission from Cingular(1219459138)/<a href="http://Shutterstock.com" target="_blank">Shutterstock.com</a> (accessed on 8 July 2024).—(<b>A.1</b>) Robot is mainly made up of three units: a propulsion unit, an excavation unit, and a discharging unit. The propulsion unit contains three additional propulsion subunits and propels through a borehole by reproducing the peristaltic crawling motion of an earthworm. Moreover, the propulsion unit allows LEAVO to excavate deep underground by supporting the reaction torque/force of the excavation by gripping the wall of the borehole. The excavation unit mainly includes an excavation instrument, namely, an “earth auger”, and a casing pipe covering the earth auger. The excavation unit excavates soil and transports it to the back of the robot. The soil in the back of the robot is discharged out of the borehole using the discharging unit. Reprinted from [<a href="#B59-biomimetics-09-00693" class="html-bibr">59</a>], Copyright (2018), with permission from IEEE. (<b>A.2</b>) Bio-inspired PSA modules are assembled in series using interconnections to form a soft robot with passive setae-like friction pads on its ventral side. (<b>A.3</b>) Working principle of the actuator with positive and negative pressure compared to the muscular motion observed in earthworm segments. Reproduced from [<a href="#B69-biomimetics-09-00693" class="html-bibr">69</a>]. CC BY 4.0. Surface Exploration: (<b>II</b>) Snake, Sonora Occipitalis (Standard Copyright License transferred to the authors) Adapted with permission from Matt Jeppson(86483413)/<a href="http://Shutterstock.com" target="_blank">Shutterstock.com</a> (accessed on 8 July 2024).—(<b>B.1</b>) An overview of the snake robot locomotion experiment. The snake robot is moving on granular terrain. A single DC motor drives the robot to generate sidewinding locomotion. 
The motion capture system captures the motion data through five reflective markers on the snake robot. (<b>B.2</b>) Fabrication of the continuous snake robot with a single rotary motor. Different mounting holes on the head anchor are used to adjust the slope angle. Basins assemble the body shells. (<b>B.3</b>) A cylindrical helix rod with two coils is made by 3D printing. (<b>B.4</b>) 3D printed body shells are linked to form a robot snake shell. (<b>B.5</b>) the helix rod is put into the body shells to form the snake robot body. (<b>B.6</b>) The snake robot body is filmed with silicone elastomers to improve the friction coefficient; (<b>B.7</b>) Prototype of snake robot after painting. Reprinted from [<a href="#B83-biomimetics-09-00693" class="html-bibr">83</a>], Copyright (2023), with permission from IEEE. Note: The left column shows the animal, while the right column represents the bio-inspired robot.</p>
Full article ">Figure 5
<p>Surface Exploration: (<b>I</b>) Lizard, Scincus Scincus (Standard Copyright License transferred to the authors) Adapted with permission from Kurit afshen(2358731213)/<a href="http://Shutterstock.com" target="_blank">Shutterstock.com</a>, (accessed on 8 July 2024).—(<b>A.1</b>) schematic of robot design—top view, and soft-amphibious robot-Reprinted from [<a href="#B77-biomimetics-09-00693" class="html-bibr">77</a>], Copyright (2017), with permission from IEEE. (<b>A.2</b>) fabricated prototype of the lizard-inspired quadruped robot moving on simulated Mars surface terrains. Reproduced from [<a href="#B79-biomimetics-09-00693" class="html-bibr">79</a>]. CC BY 4.0. (<b>II</b>) Spider, Carparachne Aureoflava (Standard Copyright License transferred to the authors) Adapted with permission from Tobias Hauke(1958871052)/<a href="http://Shutterstock.com" target="_blank">Shutterstock.com</a> (accessed on 8 July 2024).—(<b>B.1</b>) 4 legged-system showing the pitch, roll, and yaw servo motors associated with the hemispherical limbs while the robot is in the crawling posture. (<b>B.2</b>) Bio-inspired reconfigurable prototype. Reproduced from [<a href="#B88-biomimetics-09-00693" class="html-bibr">88</a>]. CC BY 4.0. Note: The left column shows the animal, while the right column represents the bio-inspired robot.</p>
Full article ">
19 pages, 11922 KiB  
Article
Changing the Formations of Unmanned Aerial Vehicles
by Krzysztof Falkowski and Maciej Kurenda
Appl. Sci. 2024, 14(22), 10424; https://doi.org/10.3390/app142210424 - 13 Nov 2024
Viewed by 161
Abstract
The development of hierarchical structures of unmanned aerial vehicles (UAVs) increases the efficiency of unmanned aerial systems. The grouping of UAVs enlarges the area of reconnaissance or the force of an assault. These requirements can be met through a UAV formation. The UAVs in the [...] Read more.
The development of hierarchical structures of unmanned aerial vehicles (UAVs) increases the efficiency of unmanned aerial systems. The grouping of UAVs enlarges the area of reconnaissance or the force of an assault. These requirements can be met through a UAV formation. The UAVs in the formation must be controlled and managed by a commander, but the commander cannot control individual UAVs. The UAVs within the formation are assigned specific individual tasks, so it is possible to achieve formation flight with minimal collisions between UAVs and maximized equipment utilization. This paper aims to present a method of formation control for multiple UAVs that allows dynamic changes in the constellations of UAVs. The article includes the results of tests and research conducted in real-world conditions involving a formation capable of adapting its configuration. The results are presented as an element of research on a high-autonomy swarm that can be controlled by one pilot/operator. The control of a swarm consisting of many UAVs (several hundred) by one person is a current research problem. The article presents a fragment of research work on high-autonomy UAV swarms, including a field test that focuses on UAV constellation control. Full article
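As a rough illustration of the kind of reconfiguration tested here, one can compute target slots for the line and triangle constellations and reassign UAVs between them. The slot geometry below is an assumption for illustration, not the authors' actual layout:

```python
def line_formation(n, spacing):
    """Slots spread along the y axis, centred on the leader at the origin."""
    return [(0.0, (i - (n - 1) / 2) * spacing) for i in range(n)]

def triangle_formation(n, spacing):
    """Fill the rows of a wedge behind the leader: row r holds r + 1 slots."""
    slots, r = [], 0
    while len(slots) < n:
        for j in range(r + 1):
            if len(slots) < n:
                slots.append((-r * spacing, (j - r / 2) * spacing))
        r += 1
    return slots
```

Switching formations then amounts to mapping each UAV from its current slot to a slot in the new list; staggering the moves in altitude first, as the figures show, is one way to keep the transitions collision-free.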
Show Figures

Figure 1

Figure 1
<p>Construction of the quadrotor: (<b>a</b>) the power supply and drive module; (<b>b</b>) the control module of the UAV.</p>
Full article ">Figure 2
<p>One of the flying platforms used in the research.</p>
Full article ">Figure 3
<p>The flying platform during tests.</p>
Full article ">Figure 4
<p>Body frame (Ox<sub>b</sub>y<sub>b</sub>z<sub>b</sub>) and Earth frame (Oxyz).</p>
Full article ">Figure 5
<p>The flight controller.</p>
Full article ">Figure 6
<p>Flight manager system.</p>
Full article ">Figure 7
<p>Electronic equipment of the UAV.</p>
Full article ">Figure 8
<p>The measured altitude: test of the vertical UAV movement.</p>
Full article ">Figure 9
<p>The measured speed: test of the vertical UAV movement.</p>
Full article ">Figure 10
<p>The measured speed: test of the horizontal UAV movement.</p>
Full article ">Figure 11
<p>The measured altitude: test of the horizontal UAV movement.</p>
Full article ">Figure 12
<p>The measured heading: test of the horizontal UAV movement.</p>
Full article ">Figure 13
<p>Flight formation configurations: (<b>a</b>) the line formation; (<b>b</b>) the triangle formation.</p>
Full article ">Figure 14
<p>Horizontal separations in formation.</p>
Full article ">Figure 15
<p>Movement to successive positions in the formation.</p>
Full article ">Figure 16
<p>UAVs organized in a line formation.</p>
Full article ">Figure 17
<p>UAVs organized in a triangle formation.</p>
Full article ">Figure 18
<p>The transition from a line formation to a triangle formation.</p>
Full article ">Figure 19
<p>Change in the altitude of UAVs—the last step of reconfiguration from the line to the triangle formation.</p>
Full article ">Figure 20
<p>Change in the altitude of UAVs—the first step of reconfiguration from the line to the triangle formation.</p>
Full article ">Figure 21
<p>Change in the vertical speed of UAVs during altitude increase.</p>
Full article ">Figure 22
<p>Change in the vertical speed of UAVs during descent.</p>
Full article ">Figure 23
<p>The result of formation reconfiguration from line to triangle.</p>
Full article ">Figure 24
<p>Change in the horizontal speed during the position change (<a href="#applsci-14-10424-f009" class="html-fig">Figure 9</a>)—the second step of reconfiguration from the line to the triangle formation.</p>
Full article ">Figure 25
<p>Changes in altitude of UAVs during multiple changes in flight formation.</p>
Full article ">Figure 26
<p>Changes in the vertical speed of UAVs during multiple changes in flight formation.</p>
Full article ">Figure 27
<p>Changes in the horizontal speed of UAVs during multiple changes in flight formation.</p>
Full article ">Figure 28
<p>Line formation (<b>a</b>) and triangle formation (<b>b</b>).</p>
Full article ">
17 pages, 7398 KiB  
Article
Supported Influence Mapping for Mobile Robot Pathfinding in Dynamic Indoor Environments
by Paweł Stawarz, Dominik Ozog and Wojciech Łabuński
Sensors 2024, 24(22), 7240; https://doi.org/10.3390/s24227240 - 13 Nov 2024
Viewed by 236
Abstract
Pathfinding is the process of finding the lowest cost route between a pair of points in space. The aforementioned cost can be based on time, distance, the number of required turns, and other individual or complex criteria. Pathfinding in dynamic environments is a [...] Read more.
Pathfinding is the process of finding the lowest cost route between a pair of points in space. The aforementioned cost can be based on time, distance, the number of required turns, and other individual or complex criteria. Pathfinding in dynamic environments is a complex issue, which has a long history of academic interest. An environment is considered dynamic when its topology may change in real time, often due to human interference. Influence mapping is a solution originating from the field of video games, which was previously used to solve similar problems in virtual environments, but achieved mixed results in real-life scenarios. The purpose of this study was to find whether the algorithm could be used in real indoor environments when combined with information collected by remote sensors. Full article
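Influence mapping as described here can be sketched as a grid on which each source spreads a decaying value, obstacles block the spread, and opposite signs cancel where they overlap. The propagation rule below (breadth-first spread with multiplicative decay and a hypothetical 0.01 cutoff) is one common formulation, not necessarily the variant used in the paper:

```python
from collections import deque

def propagate_influence(width, height, sources, obstacles, decay=0.7):
    """Spread each source's (possibly negative) influence over a 4-connected
    grid with multiplicative decay per cell.  Obstacles block propagation,
    and overlapping influences of opposite sign sum toward neutral."""
    grid = [[0.0] * width for _ in range(height)]
    for (sx, sy), value in sources.items():
        seen = {(sx, sy)}
        frontier = deque([((sx, sy), value)])
        # BFS order gives each cell its shortest-path (strongest) value
        while frontier:
            (x, y), v = frontier.popleft()
            grid[y][x] += v
            nv = v * decay
            if abs(nv) < 0.01:          # stop once influence is negligible
                continue
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if (0 <= nx < width and 0 <= ny < height
                        and (nx, ny) not in obstacles
                        and (nx, ny) not in seen):
                    seen.add((nx, ny))
                    frontier.append(((nx, ny), nv))
    return grid
```

A planner can then use the summed influence as an additive cost term when scoring candidate cells, steering paths away from negative (e.g., human-occupied) regions.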
(This article belongs to the Section Navigation and Positioning)
Show Figures

Figure 1

Figure 1
<p>Visualization of an influence map with interesting elements marked. (<b>A</b>) Stationary positive influence source. (<b>B</b>) Negative influence source moving forward and leaving an influence trail. (<b>C</b>) Overlap of inverse influences yields a neutral zone. (<b>D</b>) Obstacles stop influencing propagation. (<b>E</b>) Non-point influence source.</p>
Full article ">Figure 2
<p>Diagram of the operating environment, including an influence map with the planned robot trajectory marked.</p>
Full article ">Figure 3
<p>One of the robots used during the research.</p>
Full article ">Figure 4
<p>Communication flowchart in the study.</p>
Full article ">Figure 5
<p>The sum of the influence of values in the entire graph.</p>
Full article ">
25 pages, 3646 KiB  
Article
Application of Compensation Algorithms to Control the Speed and Course of a Four-Wheeled Mobile Robot
by Gennady Shadrin, Alexander Krasavin, Gaukhar Nazenova, Assel Kussaiyn-Murat, Albina Kadyroldina, Tamás Haidegger and Darya Alontseva
Sensors 2024, 24(22), 7233; https://doi.org/10.3390/s24227233 - 12 Nov 2024
Viewed by 381
Abstract
This article presents a tuned control algorithm for the speed and course of a four-wheeled automobile-type robot as a single nonlinear object, developed by an analytical approach that compensates for the object's dynamics and additive effects. The method is based on the assessment of [...] Read more.
This article presents a tuned control algorithm for the speed and course of a four-wheeled automobile-type robot as a single nonlinear object, developed by an analytical approach that compensates for the object's dynamics and additive effects. The method is based on the assessment of external effects; as a result, new, advanced feedback features may appear in the control system. This approach ensures automatic movement of the object with accuracy up to a given reference filter, which is important for stable and accurate control under various conditions. In the process of synthesizing the control algorithm, an inverse mathematical model of the robot was built, and reference filters were developed for a closed-loop control system through external-effect channels, providing the possibility of physical implementation of the control algorithm and compensation of external effects through feedback. This combined approach allows us to take into account various effects on the robot and ensure its stable control. The developed algorithm provides control of the robot both when moving forward and backward, which expands the capabilities of maneuvering and planning motion trajectories and is especially important for robots working in confined spaces or requiring precise movement in various directions. The efficiency of the algorithm is demonstrated using a computer simulation of a closed-loop control system under various external effects. It is planned to further develop a digital algorithm for implementation on an onboard microcontroller, in order to use the new algorithm in the overall motion control system of a four-wheeled mobile robot. Full article
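The compensation idea (an inverse plant model plus a reference filter, with external effects estimated from the mismatch between predicted and measured behavior and cancelled through feedback) can be illustrated on a one-dimensional speed loop. The plant, gains, and estimator below are simplifying assumptions, not the paper's full nonlinear robot model:

```python
def run_compensated_speed_control(v_ref, d, m=1.0, tau=0.5, dt=0.01, steps=2000):
    """Speed loop for a cart-like robot v' = (u + d) / m with an unknown
    constant disturbance d.  A first-order reference filter shapes the step
    command; the inverse model supplies the drive force, and the disturbance
    estimate (from predicted-vs-measured acceleration) is cancelled."""
    v = 0.0          # measured speed
    r = 0.0          # reference-filter state
    d_hat = 0.0      # disturbance estimate
    for _ in range(steps):
        r += dt * (v_ref - r) / tau               # reference filter
        r_dot = (v_ref - r) / tau
        u = m * r_dot + m * (r - v) / tau - d_hat  # inverse model + feedback
        v_prev = v
        v += dt * (u + d) / m                      # plant with disturbance
        a_meas = (v - v_prev) / dt                 # measured acceleration
        a_model = (u + d_hat) / m                  # model-predicted acceleration
        d_hat += 0.5 * m * (a_meas - a_model)      # update estimate from mismatch
    return v, d_hat
```

Because the estimate update is driven by the model mismatch, the disturbance term is learned and cancelled, after which the speed tracks the reference filter exactly, mirroring the "accuracy up to a given reference filter" property claimed in the abstract.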
Show Figures

Figure 1

Figure 1
<p>Inverse system model based on feedforward control system.</p>
Full article ">Figure 2
<p>Inverse system model based on feedback control system.</p>
Full article ">Figure 3
<p>Inverse model of the control object as a signal converter.</p>
Full article ">Figure 4
<p>Series connection of the signal converter (“reference filter”) and the inverse model of the control object.</p>
Full article ">Figure 5
<p>The diagram of the robot’s location on a plane in fixed coordinates <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">x</mi> </mrow> <mrow> <mi mathvariant="normal">N</mi> </mrow> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">y</mi> </mrow> <mrow> <mi mathvariant="normal">N</mi> </mrow> </msub> </mrow> </semantics></math>. <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">x</mi> </mrow> <mrow> <mn>01</mn> </mrow> </msub> </mrow> </semantics></math>—robot speed; <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">x</mi> </mrow> <mrow> <mn>02</mn> </mrow> </msub> </mrow> </semantics></math>—front wheel steering angle; <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">x</mi> </mrow> <mrow> <mn>03</mn> </mrow> </msub> </mrow> </semantics></math>—robot course.</p>
Full article ">Figure 6
<p>The connection of the curvature and the trajectory and the angular velocity of the mobile robot.</p>
Full article ">Figure 7
<p>A schematic diagram of the steering wheel angle and the radius of the circle tangent to the trajectory.</p>
Full article ">Figure 8
<p>A block diagram of the robot’s speed and course control system.</p>
Full article ">Figure 9
<p>Transient processes in the robot control system during single-step changes in speed and heading tasks and forward movement. The designations of the variables correspond to their designations in Equations (25) and (52).</p>
Full article ">Figure 10
<p>Transient processes in the robot control system during single-step changes in speed and heading tasks and backward movement.</p>
Full article ">Figure 11
<p>Transient processes in the robot control system during single-step changes in speed and heading tasks. The speed command changes 3 s after the heading command was changed.</p>
Full article ">Figure 12
<p>Transient processes in the robot control system with a single-step change in the speed task and a 3-radian step change in the course task.</p>
Full article ">Figure 13
<p>The robot control signals presented (<b>top figure</b>) in the case when the movement of the robot was in fixed coordinates (<b>bottom figure</b>) and when the course assignment changed by ±180 degrees every 10 s.</p>
Full article ">Figure 14
<p>The robot maneuvers when moving back and forth.</p>
Full article ">Figure 15
<p>Transient processes in the robot control system when sequentially changing the coefficients <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>k</mi> </mrow> <mrow> <mn>1</mn> </mrow> </msub> </mrow> </semantics></math> … <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>k</mi> </mrow> <mrow> <mn>4</mn> </mrow> </msub> </mrow> </semantics></math> of the robot’s mathematical model by ±50% relative to their calculated values while tuning the regulator to the calculated values.</p>
Full article ">Figure 16
<p>Transient processes in the robot control system at a nominal speed of 1 m/s and a course of 1 radian after 5 s and under the influence of disturbances.</p>
Full article ">
32 pages, 3912 KiB  
Article
Proposed Multi-ST Model for Collaborating Multiple Robots in Dynamic Environments
by Hai Van Pham, Huy Quoc Do, Minh Nguyen Quang, Farzin Asadi and Philip Moore
Machines 2024, 12(11), 797; https://doi.org/10.3390/machines12110797 - 11 Nov 2024
Viewed by 401
Abstract
Coverage path planning describes the process of finding an effective path robots can take to traverse a defined dynamic operating environment where there are static (fixed) and dynamic (mobile) obstacles that must be located and avoided in coverage path planning. However, most coverage [...] Read more.
Coverage path planning describes the process of finding an effective path robots can take to traverse a defined dynamic operating environment where there are static (fixed) and dynamic (mobile) obstacles that must be located and avoided in coverage path planning. However, most coverage path planning methods are limited in their ability to effectively manage the coordination of multiple robots operating in concert. In this paper, we propose a novel coverage path planning model (termed Multi-ST) which utilizes the spiral-spanning tree coverage algorithm with intelligent reasoning and knowledge-based methods to achieve optimal coverage, obstacle avoidance, and robot coordination. In experimental testing, we have evaluated the proposed model with a comparative analysis of alternative current approaches under the same conditions. The reported results show that the proposed model enables the avoidance of static and moving obstacles by multiple robots operating in concert in a dynamic operating environment. Moreover, the results demonstrate that the proposed model outperforms existing coverage path planning methods in terms of coverage quality, robustness, scalability, and efficiency. In this paper, the assumptions, limitations, and constraints applicable to this study are set out along with related challenges, open research questions, and proposed directions for future research. We posit that our proposed approach can provide an effective basis upon which multiple robots can operate in concert in a range of ‘real-world’ domains and systems where coverage path planning and the avoidance of static and dynamic obstacles encountered in completing tasks is a systemic requirement. Full article
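The spiral-spanning-tree idea underlying the model can be sketched in its simplest single-robot form: build a depth-first spanning tree over the free cells and output the move-and-return traversal (compare the STA scanning procedure shown in Figure 1). This sketch omits the paper's multi-robot coordination and knowledge-based reasoning:

```python
def spanning_tree_coverage(width, height, obstacles, start=(0, 0)):
    """Depth-first spanning-tree traversal of the free cells: scan the four
    neighbours in a fixed counter-clockwise order, descend into unvisited
    cells, and return to the parent cell when none remain."""
    visited = {start}
    path = [start]

    def neighbours(cell):
        x, y = cell
        # counter-clockwise scan order: right, up, left, down
        for dx, dy in ((1, 0), (0, 1), (-1, 0), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in obstacles:
                yield (nx, ny)

    def dfs(cell):
        for nxt in neighbours(cell):
            if nxt not in visited:
                visited.add(nxt)
                path.append(nxt)      # move into the new cell
                dfs(nxt)
                path.append(cell)     # return to the parent cell
    dfs(start)
    return path
```

The full STC algorithm instead circumnavigates this tree on a finer subgrid so every subcell is covered exactly once; partitioning the grid among robots, as the Multi-ST model does, turns the single tree into one tree per robot.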
(This article belongs to the Special Issue Recent Developments in Machine Design, Automation and Robotics)
Show Figures

Figure 1

Figure 1
<p>A model showing the path-finding procedure of the STA; (<b>a</b>) counterclockwise scanning of the four neighbors; (<b>b</b>) a move from x to a new cell y; (<b>c</b>) a return from x to the parent cell w.</p>
Full article ">Figure 2
<p>Special side-edges in STC consisting of (<b>a</b>) double-sided edge; (<b>b</b>) single-side edge; (<b>c</b>) node doubling.</p>
Full article ">Figure 3
<p>The approach to manage (i.e., avoid) special obstacles consisting of (<b>a</b>) partially occupied cell; (<b>b</b>) deformed path crosses spanning tree edges.</p>
Full article ">Figure 4
<p>The proposed Multi-ST Model.</p>
Full article ">Figure 5
<p>The stepwise process implemented in the OMST algorithm. Shown is a scenario where two robots share visited cell information.</p>
Full article ">Figure 6
<p>Two robots operating in the FSST algorithm where robot <math display="inline"><semantics> <msub> <mi>R</mi> <mn>1</mn> </msub> </semantics></math> operates in the blue cells with robot <math display="inline"><semantics> <msub> <mi>R</mi> <mn>2</mn> </msub> </semantics></math> operating in the red cells. The three hatched cells represent cells containing obstacles.</p>
Full article ">Figure 7
<p>The home screen of the system from which a user can select the modes corresponding to the <span class="html-italic">offline</span> and <span class="html-italic">online</span> algorithm.</p>
Full article ">Figure 8
<p>The DOE comprised a 24 × 24 cell grid with four robots.</p>
Full article ">Figure 9
<p>The result for the <span class="html-italic">Multi-ST</span> model with four robots and five random shaped obstructions.</p>
Full article ">Figure 10
<p>The figure shows the spanning trees generated by four robots.</p>
Full article ">Figure 11
<p>The knowledge-base screen illustrating the rule-based system (see <a href="#machines-12-00797-t002" class="html-table">Table 2</a> and <a href="#machines-12-00797-t003" class="html-table">Table 3</a>).</p>
Full article ">Figure 12
<p>The results for multiple robots are operating on a DOE consisting of a 50 × 50 cell grid.</p>
Full article ">Figure 13
<p>The operating time, computed as [the number of steps the robot has to take × 100 ms].</p>
Full article ">Figure 14
<p>An exemplary case where the red convex hull is constrained by the green and black robots.</p>
Full article ">Figure 15
<p>Operating times for robot operation on a 50 × 50 cell map.</p>
Full article ">Figure 16
<p>Operating times for robot operation on a 80 × 80 cell map.</p>
Full article ">Figure 17
<p>Operating time of the robot on a 100 × 100 cell map.</p>
Full article ">
17 pages, 6898 KiB  
Article
SLAM Algorithm for Mobile Robots Based on Improved LVI-SAM in Complex Environments
by Wenfeng Wang, Haiyuan Li, Haiming Yu, Qiuju Xie, Jie Dong, Xiaofei Sun, Honggui Liu, Congcong Sun, Bin Li and Fang Zheng
Sensors 2024, 24(22), 7214; https://doi.org/10.3390/s24227214 - 11 Nov 2024
Viewed by 480
Abstract
The foundation of autonomous robot movement is quickly grasping the robot's position and surroundings, a task for which SLAM technology provides important support. In complex and dynamic environments, single-sensor SLAM methods often suffer from degeneracy. In this paper, [...] Read more.
The foundation of autonomous robot movement is quickly grasping the robot's position and surroundings, a task for which SLAM technology provides important support. In complex and dynamic environments, single-sensor SLAM methods often suffer from degeneracy. In this paper, a multi-sensor fusion SLAM method based on the LVI-SAM framework is proposed. First, the state-of-the-art SuperPoint feature detection algorithm is used to extract feature points in the visual-inertial subsystem, enhancing feature detection in complex scenarios. In addition, to improve loop-closure detection in complex scenarios, scan context is used to optimize the loop-closure stage. Experimental results show that the RMSE of the trajectory on the 05 sequence from the KITTI dataset and the Street07 sequence from the M2DGR dataset is reduced by 12% and 11%, respectively, compared to LVI-SAM. In simulated complex animal-farm environments, the error of this method at the starting and ending points of the trajectory is also smaller than that of LVI-SAM. These experimental comparisons demonstrate that the proposed method achieves higher precision and robustness in localization and mapping within complex animal-farm environments. Full article
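The scan context loop-closure descriptor mentioned in the abstract can be sketched generically: a LiDAR scan is binned into a polar ring × sector matrix of maximum point heights, and two scans are compared by a column-wise cosine distance minimized over circular column shifts (yaw). The sketch below is a simplified reimplementation of that idea, not the authors' code; the bin counts, maximum range, and the assumption of non-negative point heights are illustrative choices.

```python
import numpy as np

def scan_context(points, num_rings=20, num_sectors=60, max_range=80.0):
    """Build a scan context descriptor: a ring x sector matrix whose
    entries are the maximum point height (z) falling in each polar bin.
    Assumes non-negative heights; real implementations offset z by the
    sensor mounting height."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x) + np.pi  # normalize angle to [0, 2*pi)
    in_range = r < max_range
    ring = np.minimum((r[in_range] / max_range * num_rings).astype(int),
                      num_rings - 1)
    sector = np.minimum((theta[in_range] / (2 * np.pi) * num_sectors).astype(int),
                        num_sectors - 1)
    desc = np.zeros((num_rings, num_sectors))
    np.maximum.at(desc, (ring, sector), z[in_range])
    return desc

def sc_distance(a, b):
    """Rotation-invariant distance: minimum mean column-wise cosine
    distance over all circular column shifts of b (yaw search)."""
    best = np.inf
    for shift in range(b.shape[1]):
        shifted = np.roll(b, shift, axis=1)
        num = np.sum(a * shifted, axis=0)
        den = np.linalg.norm(a, axis=0) * np.linalg.norm(shifted, axis=0)
        valid = den > 0
        if valid.any():
            best = min(best, 1.0 - np.mean(num[valid] / den[valid]))
    return best
```

The circular-shift search is what makes the match invariant to the robot's heading; the translation search with prior information illustrated in Figure 8 below builds on the same descriptor.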
Show Figures

Figure 1
<p>Experimental platform for livestock inspection robot.</p>
Full article ">Figure 2
<p>Experimental conditions and scenarios (the numbers correspond to <a href="#sensors-24-07214-t003" class="html-table">Table 3</a>).</p>
Full article ">Figure 3
<p>The system architecture of our method. The red solid-line boxes (SuperPoint) and the orange solid-line boxes (scan context) are the innovative parts of our method compared with LVI-SAM.</p>
Full article ">Figure 4
<p>Comparison of Shi–Tomasi, ORB, and SuperPoint feature detection. (<b>a</b>) Shi–Tomasi algorithm. (<b>b</b>) ORB algorithm. (<b>c</b>) SuperPoint algorithm. The red circles indicate the feature points extracted by this method.</p>
Full article ">Figure 5
<p>Frame 265 of the KITTI sequence 05 scan context transformation. (<b>a</b>) 3D point cloud. (<b>b</b>) Scan context.</p>
Full article ">Figure 6
<p>Scan context algorithm overview [<a href="#B30-sensors-24-07214" class="html-bibr">30</a>]. Copyright © 2018, IEEE.</p>
Full article ">Figure 7
<p>Frames 61 and 1105 of the KITTI sequence 05 scan context transformation. (<b>a</b>) The frame 61 scan context. (<b>b</b>) The frame 1105 scan context. (<b>c</b>) The scan context of frame 1105 after translation.</p>
Full article ">Figure 8
<p>Schematic diagram of translation search method with prior information.</p>
Full article ">Figure 9
<p>The mapping effects of different methods on KITTI sequence 05. (<b>a</b>) 3D mapping of the method proposed in this paper. (<b>b</b>) 3D mapping of LVI-SAM. (<b>c</b>) 3D map construction details of the method proposed in this paper. (<b>d</b>) 3D map construction details of LVI-SAM.</p>
Full article ">Figure 10
<p>Comparison of trajectories using different methods on the KITTI sequence 05. (<b>a</b>) Comparison of trajectories on the x-y plane. (<b>b</b>) Comparison of trajectories in the x-, y-, and z-directions.</p>
Full article ">Figure 11
<p>Comparison of APE at various time points on KITTI sequence 05 (/m).</p>
Full article ">Figure 12
<p>Comparison of trajectories using different methods on the M2DGR sequence Street07.</p>
Full article ">Figure 13
<p>Comparison of APE at various time points on the M2DGR sequence Street07 (/m).</p>
Full article ">Figure 14
<p>The mapping effects of different methods in real-world scenarios. (<b>a</b>) 3D mapping of the method proposed in this paper. (<b>b</b>) 3D mapping of LVI-SAM. (<b>c</b>) 3D map construction details of the method proposed in this paper. (<b>d</b>) 3D map construction details of LVI-SAM.</p>
Full article ">Figure 15
<p>Comparison of trajectories using different methods in real-world scenarios.</p>
Full article ">Figure 16
<p>Movement speed using different methods at various times under real-world scenarios.</p>
Full article ">
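The APE comparisons in Figures 11 and 13 report translation error after aligning the estimated trajectory to ground truth. Below is a minimal sketch of that evaluation, assuming the two trajectories are already time-associated, using Umeyama least-squares rigid alignment (rotation and translation, no scale) as trajectory-evaluation tools such as evo do.

```python
import numpy as np

def umeyama_align(est, gt):
    """Least-squares rigid alignment (rotation R, translation t) that maps
    the estimated trajectory onto ground truth (Umeyama, no scale)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    cov = (gt - mu_g).T @ (est - mu_e) / est.shape[0]
    U, _, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:  # guard against a reflection solution
        S[2, 2] = -1
    R = U @ S @ Vt
    t = mu_g - R @ mu_e
    return R, t

def ape_rmse(est, gt):
    """Absolute pose error (translation part): RMSE of point-wise
    distances after aligning est onto gt."""
    R, t = umeyama_align(est, gt)
    aligned = est @ R.T + t
    err = np.linalg.norm(aligned - gt, axis=1)
    return np.sqrt(np.mean(err ** 2))
```

The per-pose errors `err` are what the APE-over-time plots display; their RMSE is the scalar trajectory error summarized in the abstract.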