

Search Results (261)

Search Parameters:
Journal = Biomimetics
Section = Locomotion and Bioinspired Robotics

29 pages, 11023 KiB  
Article
Online Traffic Crash Risk Inference Method Using Detection Transformer and Support Vector Machine Optimized by Biomimetic Algorithm
by Bihui Zhang, Zhuqi Li, Bingjie Li, Jingbo Zhan, Songtao Deng and Yi Fang
Biomimetics 2024, 9(11), 711; https://doi.org/10.3390/biomimetics9110711 - 19 Nov 2024
Viewed by 555
Abstract
Despite the implementation of numerous interventions to enhance urban traffic safety, estimating the risk of traffic crashes, with their life-threatening and economic costs, remains a significant challenge. In light of the above, an online inference method for traffic crash risk based on the self-developed TAR-DETR and WOA-SA-SVM methods is proposed. The method's robust data inference capabilities can be applied to autonomous mobile robots and vehicle systems, enabling real-time road condition prediction, continuous risk monitoring, and timely roadside assistance. First, a self-developed dataset for urban traffic object detection, named TAR-1, is created by extracting traffic information from major roads around Hainan University in China and incorporating Russian car crash news. Secondly, we develop an innovative Context-Guided Reconstruction Feature Network-based Urban Traffic Object Detection Model (TAR-DETR). The model demonstrates a detection accuracy of 76.8% for urban traffic objects, exceeding the performance of other state-of-the-art object detection models. The TAR-DETR model is employed on TAR-1 to extract urban traffic risk features, and the resulting feature dataset is designated TAR-2, comprising six risk features and three categories. A new inference algorithm based on WOA-SA-SVM is proposed to optimize the parameters (C, g) of the SVM, thereby enhancing the accuracy and robustness of urban traffic crash risk inference. The algorithm combines the Whale Optimization Algorithm (WOA) and Simulated Annealing (SA) into a hybrid bionic intelligent optimization algorithm. The TAR-2 dataset is input into a Support Vector Machine (SVM) optimized with this hybrid algorithm and used to infer the risk of urban traffic crashes. The proposed WOA-SA-SVM method achieves an average accuracy of 80% in urban traffic crash risk inference.
(This article belongs to the Special Issue Optimal Design Approaches of Bioinspired Robots)
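As a rough sketch of the hybrid optimizer described in the abstract, the following Python tunes the SVM parameters (C, g) with whale-style position updates plus a simulated-annealing acceptance test. The dataset, search bounds, and cooling schedule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: tuning SVM (C, gamma) with Whale Optimization Algorithm
# updates and a Simulated-Annealing acceptance step. Dataset and
# hyperparameters are illustrative, not the paper's.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=6, n_classes=3,
                           n_informative=4, random_state=0)

def fitness(p):
    C, g = 10.0 ** p  # search in log10 space: p = [log10(C), log10(gamma)]
    return cross_val_score(SVC(C=C, gamma=g), X, y, cv=3).mean()

n_whales, n_iter = 10, 20
bounds = np.array([[-2, 3], [-4, 1]])
pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_whales, 2))
fit = np.array([fitness(p) for p in pos])
best, best_fit, T = pos[fit.argmax()].copy(), fit.max(), 1.0

for t in range(n_iter):
    a = 2 - 2 * t / n_iter                      # WOA parameter a: 2 -> 0
    for i in range(n_whales):
        r = rng.random(2)
        A, C_vec = 2 * a * r - a, 2 * rng.random(2)
        if rng.random() < 0.5:
            if np.linalg.norm(A) < 1:           # encircle the best whale
                cand = best - A * np.abs(C_vec * best - pos[i])
            else:                               # explore around a random whale
                j = rng.integers(n_whales)
                cand = pos[j] - A * np.abs(C_vec * pos[j] - pos[i])
        else:                                   # spiral update toward the best
            l = rng.uniform(-1, 1)
            cand = np.abs(best - pos[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
        cand = np.clip(cand, bounds[:, 0], bounds[:, 1])
        f = fitness(cand)
        # SA-style acceptance: keep worse candidates with temperature-decayed odds
        if f > fit[i] or rng.random() < np.exp((f - fit[i]) / T):
            pos[i], fit[i] = cand, f
            if f > best_fit:
                best, best_fit = cand.copy(), f
    T *= 0.9                                    # cool down

print("best (C, gamma):", 10.0 ** best, "CV accuracy:", round(best_fit, 3))
```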
Figures:
Figure 1: Example of TAR-1 dataset and recording data area. (a–f) Example images; (g) recording data area.
Figure 2: Example of TAR-2. (a–i) Example collision images.
Figure 3: MixUp technology implementation process.
Figure 4: Methods for extracting distances between targets.
Figure 5: Traffic crash risk inference framework: data acquisition, object detection, and automatic inference.
Figure 6: The overall network framework of TAR-DETR encompasses four principal modules: the Rectangular Self-Calibration Module, the Dynamic Interpolation Fusion Module, the Fuse Block Multi-Module, and the Pyramid Context Extraction Module. A mechanism for coordinated attention is incorporated into the backbone network. P3, P4, and P5 represent feature maps derived from disparate levels of the backbone network.
Figure 7: The overall network framework of CGRFN.
Figure 8: The network framework of the PCE.
Figure 9: The network framework of RCM.
Figure 10: The network framework of FBM and DIF modules. (a) FBM; (b) DIF module.
Figure 11: The framework of WOA-SA-SVM. (a) SA algorithm; (b) WOA; (c) SVM algorithm.
Figure 12: Training and validation convergence curves. (a) Train GIoU loss; (b) train IoU loss; (c) precision; (d) recall; (e) val GIoU loss; (f) val IoU loss; (g) mAP50; (h) mAP50–90.
Figure 13: Compared to previously advanced real-time object detectors, our TAR-DETR achieves state-of-the-art performance.
Figure 14: Object detection results of TAR-DETR. (a–c) Small number of objects; (d–f) large number of objects; (g–i) different traffic crash objects.
Figure 15: Analysis of the TAR-2 traffic crash risk dataset. (a) Correlation matrix for the TAR-2 dataset; (b) percentage of different categories in the TAR-2 dataset.
Figure 16: WOA-SA solution results for four typical test functions. (a) Sphere; (b) Rosenbrock; (c) Rastrigin; (d) Ackley.
Figure 17: The inference results of TAR-2. (a) Precision of inference for each of the three categories; (b) confusion matrix for WOA-SA-SVM.
Figure 18: Instances of wrong inference ("Collision" to "Dangerous"). (a–f) Collision instances inferred as dangerous in various environments.
Figure 19: Instances of wrong inference ("Collision" to "Safe"). (a–f) Collision instances inferred as safe in various environments.
23 pages, 10315 KiB  
Article
The Design and Adaptive Control of a Parallel Chambered Pneumatic Muscle-Driven Soft Hand Robot for Grasping Rehabilitation
by Zhixiong Zhou, Qingsong Ai, Mengnan Li, Wei Meng, Quan Liu and Sheng Quan Xie
Biomimetics 2024, 9(11), 706; https://doi.org/10.3390/biomimetics9110706 - 18 Nov 2024
Viewed by 398
Abstract
The widespread application of exoskeletons driven by soft actuators in motion assistance and medical rehabilitation has proven effective for patients who struggle with precise object grasping and suffer from insufficient hand strength due to strokes or other conditions. Repetitive passive flexion/extension exercises and active grasp training are known to aid in the restoration of motor nerve function. However, conventional pneumatic artificial muscles (PAMs) used for hand rehabilitation typically allow bending in only one direction, limiting multi-degree-of-freedom movements. Moreover, establishing precise models for PAMs is challenging, making accurate control difficult to achieve. To address these challenges, we explored the design and fabrication of a bidirectionally bending PAM, with design parameters optimized based on actual rehabilitation needs and a finite element analysis. A dynamic model of the PAM was then established using elastic strain energy and the Lagrange equation. Building on this, an adaptive position control method employing a radial basis function neural network (RBFNN), with optimized parameters and hidden-layer nodes, was developed to enhance the accuracy of these soft PAMs in assisting patients with hand grasping. Finally, a wearable soft hand rehabilitation exoskeleton was designed, offering two modes, passive training and active grasp, aimed at helping patients regain their grasping ability.
(This article belongs to the Special Issue Human-Inspired Grasp Control in Robotics)
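The adaptive scheme described above can be illustrated with a toy sketch: an RBF network compensating an unmodeled nonlinearity inside a PD tracking loop on a generic second-order plant standing in for a PAM joint. The gains, basis centers, and plant dynamics below are assumptions for illustration only.

```python
# Minimal sketch of RBF-network adaptive position control on a toy
# second-order plant; not the paper's identified PAM model.
import numpy as np

dt, T = 0.001, 5.0
steps = int(T / dt)
centers = np.linspace(-1.5, 1.5, 9)        # RBF centers over the angle range
width, gamma = 0.5, 40.0                   # basis width, adaptation rate
W = np.zeros(len(centers))                 # adaptive output weights
kp, kd = 120.0, 20.0                       # PD gains

def rbf(x):
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

x, xd = 0.0, 0.0                           # plant state: angle, angular rate
for k in range(steps):
    t = k * dt
    xr, xrd, xrdd = np.sin(t), np.cos(t), -np.sin(t)   # reference trajectory
    e, ed = xr - x, xrd - xd
    phi = rbf(x)
    u = kp * e + kd * ed + xrdd + W @ phi  # PD + feedforward + RBF compensation
    W += gamma * (e + ed) * phi * dt       # gradient-style weight update
    # "true" plant with an unmodeled nonlinearity the RBFNN must absorb
    xdd = u - 5.0 * xd - 8.0 * np.sin(x) * abs(xd)
    xd += xdd * dt
    x += xd * dt

print("final tracking error:", abs(np.sin(T) - x))
```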
Figures:
Figure 1: Hand bone structure and joint degrees of freedom.
Figure 2: PAM structure design.
Figure 3: Finite element simulation boundary conditions and load settings.
Figure 4: PAM manufacturing process.
Figure 5: Soft hand exoskeleton.
Figure 6: Block diagram of the PAM dynamics derivation process.
Figure 7: PAM geometry parameter model.
Figure 8: Structure diagram of the RBFNN.
Figure 9: Closed-loop system RBFNN control block diagram.
Figure 10: The hand exoskeleton system.
Figure 11: Motion capture system.
Figure 12: Finite element simulation results: (a,b) flexion/extension of PAMs with different chamber shapes; (c,d) with different lengths; (e,f) with different radii; (g,h) with different spiral angles.
Figure 13: Finite element analysis of flexion and extension.
Figure 14: PAM performance testing: (a) bending angle; (b) fingertip force.
Figure 15: Impact of hidden-layer nodes on initial trajectory tracking.
Figure 16: Influence of hidden-layer nodes on stable trajectory tracking.
Figure 17: Comparison of different algorithms in PAM angle tracking control.
Figure 18: Angle tracking results: (a) thumb; (b) index finger; (c) middle finger; (d) little finger.
Figure 19: Angle trajectories when grasping objects with healthy hands: (a) the thumb and little finger; (b) the index finger, middle finger, and ring finger.
Figure 20: Grasping an apple and an orange with the exoskeleton.
Figure 21: Index finger trajectory tracking during grasping: (a) bending angle; (b) error.
16 pages, 6371 KiB  
Article
A Dynamic Interference Detection Method of Underwater Scenes Based on Deep Learning and Attention Mechanism
by Shuo Shang, Jianrong Cao, Yuanchang Wang, Ming Wang, Qianchuan Zhao, Yuanyuan Song and He Gao
Biomimetics 2024, 9(11), 697; https://doi.org/10.3390/biomimetics9110697 - 14 Nov 2024
Viewed by 352
Abstract
Improving the three-dimensional reconstruction of underwater scenes is a challenging, active topic in underwater robot vision research. High dynamic interference underwater has long been one of the key issues affecting the 3D reconstruction of underwater scenes, and due to the complex underwater environment and insufficient light, existing target detection algorithms cannot meet the requirements. This paper proposes an underwater dynamic target detection algorithm based on an improved YOLOv8. The algorithm first improves the feature extraction layer of the YOLOv8 network, restructuring the Bottleneck convolution to reduce computation and improve detection accuracy. Secondly, it adds an improved SE attention mechanism to strengthen feature extraction; in addition, the confidence box loss function is improved by replacing the CIoU loss with the MPDIoU loss, which effectively speeds up model convergence. Experimental results show that the mAP of the proposed improved YOLOv8 underwater dynamic target detection algorithm reaches 95.1%, and it detects underwater dynamic targets more accurately, especially small dynamic targets in complex underwater scenes.
(This article belongs to the Special Issue Bionic Robotic Fish: 2nd Edition)
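For reference, the MPDIoU loss that replaces CIoU here penalizes IoU by the normalized squared distances between corresponding box corners. The sketch below follows the published MPDIoU formulation and may differ in detail from the authors' code.

```python
# Minimal sketch of the MPDIoU bounding-box loss: IoU penalized by the
# squared distances between the two boxes' top-left and bottom-right
# corners, normalized by the image size. Follows the published MPDIoU
# definition, not necessarily this paper's exact implementation.
import numpy as np

def mpdiou_loss(box_a, box_b, img_w, img_h):
    """Boxes in (x1, y1, x2, y2) corner format."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection / union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union > 0 else 0.0
    # normalized squared corner distances
    norm = img_w ** 2 + img_h ** 2
    d1 = ((ax1 - bx1) ** 2 + (ay1 - by1) ** 2) / norm   # top-left corners
    d2 = ((ax2 - bx2) ** 2 + (ay2 - by2) ** 2) / norm   # bottom-right corners
    return 1.0 - (iou - d1 - d2)   # loss decreases as boxes align

print(mpdiou_loss((10, 10, 50, 50), (12, 14, 55, 48), img_w=640, img_h=640))
```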
Figures:
Figure 1: YOLOv8 network framework. The arrows indicate sequential execution, where each module is performed after the previous module completes. The upper-right corner shows the detailed components of each module: purple for the Detect module, light green for the Conv module, light gray for the SPPF module, blue for the C2f module, and yellow for the Bottleneck module within the C2f module.
Figure 2: PConv (Partial Convolution) structure.
Figure 3: Improved Bottleneck structure.
Figure 4: SE-net structure.
Figure 5: ReLU function curve.
Figure 6: PReLU function curve.
Figure 7: MPDIoU parameter diagram.
Figure 8: Dynamic target detection dataset.
Figure 9: Original YOLOv8 network training results.
Figure 10: Improved YOLOv8 network training results.
Figure 11: The original images.
Figure 12: Detection results using the unmodified YOLOv8 algorithm.
Figure 13: Detection results using the proposed modified YOLOv8 algorithm.
16 pages, 9423 KiB  
Article
EchoPT: A Pretrained Transformer Architecture That Predicts 2D In-Air Sonar Images for Mobile Robotics
by Jan Steckel, Wouter Jansen and Nico Huebel
Biomimetics 2024, 9(11), 695; https://doi.org/10.3390/biomimetics9110695 - 13 Nov 2024
Viewed by 467
Abstract
The predictive brain hypothesis suggests that perception can be interpreted as the process of minimizing the error between predicted perception tokens generated via an internal world model and actual sensory input tokens. When implementing working examples of this hypothesis in the context of in-air sonar, significant difficulties arise due to the sparse nature of the reflection model that governs ultrasonic sensing. Despite these challenges, creating consistent world models using sonar data is crucial for implementing predictive processing of ultrasound data in robotics. In an effort to enable robust robot behavior using ultrasound as the sole exteroceptive sensor modality, this paper introduces EchoPT (Echo-Predicting Pretrained Transformer), a pretrained transformer architecture designed to predict 2D sonar images from previous sensory data and robot ego-motion information. We detail the transformer architecture that drives EchoPT and compare the performance of our model to several state-of-the-art techniques. In addition to presenting and evaluating our EchoPT model, we demonstrate the effectiveness of this predictive perception approach in two robotic tasks.
(This article belongs to the Special Issue Artificial Intelligence for Autonomous Robots: 3rd Edition)
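The slip-detection application (Figure 5 below) rests on comparing predicted and measured frames and flagging slip when the error jumps. The following stub sketches that idea with a stand-in predictor in place of EchoPT; frame sizes, the shift model, and the threshold are invented for illustration.

```python
# Minimal sketch of predictive-processing slip detection: predict the next
# sonar frame, compare with the measurement, and flag slip on large error.
# The predictor is a stand-in stub, not EchoPT itself.
import numpy as np

rng = np.random.default_rng(1)

def predict_next(prev_frames, velocity_cmd):
    # Stand-in for a learned predictor such as EchoPT: assume the scene
    # shifts one range bin per unit of commanded forward velocity.
    shift = int(round(velocity_cmd))
    return np.roll(prev_frames[-1], shift, axis=0)

def slip_score(predicted, measured):
    return float(np.mean((predicted - measured) ** 2))

frames = [rng.random((64, 32)) for _ in range(3)]   # fake energyscapes
threshold = 0.05
for step in range(10):
    v_cmd = 1.0
    pred = predict_next(frames, v_cmd)
    actual_shift = 1 if step < 5 else 0             # wheels slip after step 5
    measured = (np.roll(frames[-1], actual_shift, axis=0)
                + 0.01 * rng.standard_normal((64, 32)))
    score = slip_score(pred, measured)
    print(f"step {step}: error={score:.4f}", "SLIP!" if score > threshold else "")
    frames = frames[1:] + [measured]
```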
Figures:
Figure 1: Overview of the experimental setup. Panel (a) shows the simulation environment in which a two-wheeled robot drives. A sketch of the robot is shown in panel (c). The robot uses an array-based imaging sonar sensor, panel (g), capable of generating range-direction energy maps (called energyscapes), shown in panels (d–f). This sensor is modeled in the simulation environment based on accurate models of acoustic propagation and reflection. Panel (b) shows the acoustic flow model, which predicts how objects in the sensor scene move through the perceptive field for a given robot motion; the blue flow lines are shown for a linear robot motion. Panels (d–f) show the task solved in this paper: how novel sensor views can be synthesized given a set of robot velocity commands [v_lin, ω_r], each with a time-step index. Panel (d) shows the prediction based on naive shifting of the image in the range and direction dimensions. Panel (e) shows the operation using the acoustic flow model of panel (b). Both of these operators can only use the last frame to perform the prediction. Panel (f) shows the EchoPT model, which takes in n previous frames and velocity commands and predicts the novel view using a transformer neural network.
Figure 2: Overview of the network architecture of EchoPT. The EchoPT model has two inputs: the set of n previous input frames (set to three in this paper) and the n+1 velocity commands (three previous and one for the prediction). The model has three main parallel branches: a transformer branch, a feed-forward convolutional branch for the sonar images, and an MLP (multi-layer perceptron) pipeline using the velocity commands as input. These three branches are depth-concatenated and passed through more feed-forward convolutional layers to obtain a single output image.
Figure 3: Condensed version of Figure A1 in Appendix A. Panel (a) shows the target sonar image, and panel (b) shows the predicted image. Panel (c) shows the difference between the two images, and panels (d,e) show the 2D correlogram.
Figure 4: Prediction results of a single frame using three prediction methods: the naive operation, which shifts the image in the range and direction dimensions; the acoustic flow approach, which uses the acoustic flow equations to transform the image; and the EchoPT prediction.
Figure 5: A first application of predictive processing in which a robot performs a trajectory in the environment from Figure 1. In two periods (between 10 s and 16 s and between 30 s and 36 s), the robot encounters slip conditions (the robot is not performing the motion it expects to perform). In the first section, the robot slips on both wheels; in the second, only one wheel slips. The plots show the slip detector, which uses differences between the predicted and measured sensor data for different prediction horizons (one-shot, three-frame auto-regressive, and five-frame auto-regressive). Longer time horizons provide the clearest slip detection signal, with EchoPT being the only method that detects the second slip condition. Panel (a) shows the results for the naive predictor, panel (b) for the acoustic flow predictor, and panel (c) for the EchoPT predictor.
Figure 6: A second application of predictive processing in which a robot is tasked with driving from the green rectangular spawn boxes to the waypoint indicated by the green circles, using a subsumption-based control stack described in [13]. Panel (a) shows the kernel density estimate of 50 runs with clean sensor data (signal-to-noise ratio, SNR = 5 dB). In panels (b,c), we added intermittent noise to the measured sensor data (shown in panel (f), SNR = −80 dB). In panel (b), the original controller was used, showing the deterioration of the traversed paths. In panel (c), sensor data were predicted in an auto-regressive manner using EchoPT for the duration of the noise bursts and fed into the controller instead of the noisy data. Panel (d) shows the travel time for the robot in the three conditions, showing a large increase in travel time for the controller from panel (b). Panel (e) shows the deviation from the midline of the corridor, again showing a large deviation when no predictive processing is used. Panel (f) shows a small section of the evolution of the SNR over time.
Figure A1: Detailed overview of some EchoPT predictions. Given a sequence of sonar images T1 to T4 (panels (a–d)), with a robot performing a linear motion in a corridor, the EchoPT model predicts T4 (predicted) in panel (e). Panels (f–i) show the difference between T4 (predicted) and T1 to T4. These plots show that the model can capture the motion model of the sensor modality, as the errors between T4 and T4 (predicted) are near zero. The differences with the older images clearly show that the model has learned to incorporate the sensor flow data. Panels (j–n) show the 2D correlograms between the prediction and the input data.
Figure A2: Prediction of sonar images using an auto-regressive prediction model for the three prediction systems used in this paper (naive, acoustic flow, and EchoPT). As the robot motions are relatively small, the difference between the images is not clearly visible. Figure A3 shows the differences between the subsequent images, which illustrates much more clearly the advantage of the EchoPT model over the other techniques.
Figure A3: Prediction errors using an auto-regressive prediction model for the three prediction systems described. The deeper the prediction horizon, the larger the errors in the data predictions get (very noticeable in frame 6). The EchoPT model maintains the smallest prediction errors, indicating its capability to perform predictions over long time horizons. Note that after frame 3 no measured data are used in EchoPT; it relies purely on previous predictions to estimate the new data frame.
29 pages, 5444 KiB  
Article
Task Allocation and Sequence Planning for Human–Robot Collaborative Disassembly of End-of-Life Products Using the Bees Algorithm
by Jun Huang, Sheng Yin, Muyao Tan, Quan Liu, Ruiya Li and Duc Pham
Biomimetics 2024, 9(11), 688; https://doi.org/10.3390/biomimetics9110688 - 11 Nov 2024
Viewed by 636
Abstract
Remanufacturing, which benefits the environment and saves resources, is attracting increasing attention. Disassembly is arguably the most critical step in the remanufacturing of end-of-life (EoL) products. Human–robot collaborative disassembly, as a flexible semi-automated approach, can increase productivity and relieve people of tedious, laborious, and sometimes hazardous jobs. Task allocation in human–robot collaborative disassembly involves methodically assigning disassembly tasks to human operators or robots. However, the schemes for task allocation in recent studies have not been sufficiently refined, and the issue of component placement after disassembly has not been fully addressed. This paper presents a method of task allocation and sequence planning for human–robot collaborative disassembly of EoL products. The adopted criteria for human–robot disassembly task allocation are introduced. The disassembly of each component includes dismantling and placing. The performance of a disassembly plan is evaluated according to time, cost, and utility value. A discrete Bees Algorithm using genetic operators is employed to optimise the generated human–robot collaborative disassembly solutions. The proposed task allocation and sequence planning method is validated in two case studies involving an electric motor and a power battery from an EoL vehicle. The results demonstrate the feasibility of the proposed method for planning and optimising human–robot collaborative disassembly solutions.
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 3rd Edition)
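The swap and insert moves used by the discrete Bees Algorithm (Figures 3 and 4 below) can be sketched as follows, with a precedence check keeping candidate sequences feasible. The task set, precedence table, and cost function are made-up placeholders, and a plain local-search loop stands in for the full bee colony.

```python
# Minimal sketch of discrete search moves for disassembly sequencing:
# swap and insert operators plus a precedence-feasibility check.
# Tasks, precedences, and the cost function are illustrative only.
import random

random.seed(0)
tasks = list(range(8))
# precedence[t] = set of tasks that must be removed before task t
precedence = {3: {0}, 4: {1, 2}, 6: {3, 4}, 7: {5}}

def feasible(seq):
    seen = set()
    for t in seq:
        if not precedence.get(t, set()) <= seen:
            return False
        seen.add(t)
    return True

def swap_op(seq):
    i, j = random.sample(range(len(seq)), 2)
    s = seq[:]
    s[i], s[j] = s[j], s[i]
    return s

def insert_op(seq):
    i, j = random.sample(range(len(seq)), 2)
    s = seq[:]
    s.insert(j, s.pop(i))
    return s

def cost(seq):   # toy stand-in for the paper's time/cost/utility evaluation
    return sum(abs(seq[k] - k) for k in range(len(seq)))

best = tasks[:]
for _ in range(2000):   # greedy local search standing in for the bee colony
    cand = random.choice((swap_op, insert_op))(best)
    if feasible(cand) and cost(cand) < cost(best):
        best = cand
print("best sequence:", best, "cost:", cost(best))
```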
Figures:
Figure 1: The workflow of the proposed method.
Figure 2: The workflow of IDBA.
Figure 3: Swap operator in a disassembly solution.
Figure 4: Insert operator in a disassembly solution.
Figure 5: Genetic mutation in a disassembly solution.
Figure 6: Structure and combined cost constituting individual bees.
Figure 7: The setting of the forbidden direction.
Figure 8: The Gantt chart of HRCD.
Figure 9: Photograph and exploded view of an electric motor. (a) Photograph; (b) exploded view.
Figure 10: Iterative diagram (electric motor, balance mode). (a) Minimum combined cost; (b) iterative scatter plot.
Figure 11: The Gantt chart of the optimal disassembly solution of the electric motor.
Figure 12: Photograph and exploded view of the power battery. (a) Photograph; (b) exploded view.
Figure 13: Iterative diagram (power battery, balance mode). (a) Minimum combined cost; (b) iterative scatter plot.
Figure 14: Gantt chart of the optimised disassembly solution of the power battery.
Figure 15: IDBA performance for different population sizes and iterations. (a) Average running time (electric motor); (b) minimum combined cost (electric motor); (c) average running time (power battery); (d) minimum combined cost (power battery).
Figure 16: Performance comparisons of different optimisation algorithms (power battery case study). (a) Average running time for different population sizes; (b) minimum combined cost for different population sizes; (c) average running time for different numbers of iterations; (d) minimum combined cost for different numbers of iterations.
34 pages, 11454 KiB  
Article
Compassionate Care with Autonomous AI Humanoid Robots in Future Healthcare Delivery: A Multisensory Simulation of Next-Generation Models
by Joannes Paulus Tolentino Hernandez
Biomimetics 2024, 9(11), 687; https://doi.org/10.3390/biomimetics9110687 - 11 Nov 2024
Viewed by 893
Abstract
The integration of AI and robotics in healthcare raises concerns, and additional issues regarding autonomous systems are anticipated. Effective communication is crucial for robots to be seen as "caring", necessitating advanced mechatronic design and natural language processing (NLP). This paper examines the potential of humanoid robots to autonomously replicate compassionate care. The study employs computational simulations using mathematical and agent-based modeling to analyze human–robot interactions (HRIs) surpassing Tetsuya Tanioka's TRETON. It incorporates stochastic elements (through neuromorphic computing) and quantum-inspired concepts (through the lens of Martha Rogers' theory), running simulations over 100 iterations to analyze complex behaviors. Multisensory simulations (visual and audio) demonstrate the significance of "dynamic communication", (relational) "entanglement", and (healthcare system and robot's function) "superpositioning" in HRIs. Quantum and neuromorphic computing may enable humanoid robots to empathetically respond to human emotions, based on Jean Watson's ten caritas processes for creating transpersonal states. Autonomous AI humanoid robots will redefine the norms of "caring". Establishing "pluralistic agreements" through open discussions among stakeholders worldwide is necessary to align innovations with the values of compassionate care within a "posthumanist" framework, where the compassionate care provided by Level 4 robots meets human expectations. Achieving compassionate care with autonomous AI humanoid robots involves translating nursing, communication, computer science, and engineering concepts into robotic care representations while considering ethical discourses through collaborative efforts. Nurses should lead the design and implementation of AI and robots guided by "technological knowing" in Rozzano Locsin's TCCN theory.
(This article belongs to the Special Issue Optimal Design Approaches of Bioinspired Robots)
Figures:
Figure 1: Interpretation of Tanioka's [10] model according to cybernetic HRI communication [92].
Figure 2: Communication in "Level 3" HRI [92].
Figure 3: Model validation for "Level 3" HRI [92].
Figure 4: The representation of dissonance with "Level 3" HRI [92]. (Download the file at https://github.com/jphernandezrn/Data-Sonification-Human-Robot-Interaction, accessed on 25 August 2024.)
Figure 5: The representation of Level 4 HRI. (Note: the mathematics in quantum communication is referenced from Yuan and Cheng [94] when discussing fidelity.)
Figure 6: The communication, entanglement, and superpositioning of the three states.
Figure 7: Model validation involving overlapping states.
Figure 8: The sonification of frequencies between states exhibiting quantum relationships. (Download the file at https://github.com/jphernandezrn/Data-Sonification-Human-Robot-Interaction.)
Figure 9: An intuitive, self-regulating, and agile robot system architecture through steps 1–9. Notes: (a) Information processing must be dynamic, symbolically instantiated (unsupervised), and evolving (unbounded materially) through (c) "state transition" (the humanoid robot's conditions based on actions or events). Unbounded transitions refer to a system's capacity for an unlimited number of transitions between states, often occurring when the conditions for transitions are not strictly defined or when the system can respond to a wide variety of inputs. In the real world, second-order cybernetics [99] should allow the operation of artificial cognition that is fluid and capable of co-creating knowledge within the healthcare network. (b) Alternatively, it can involve the construction and decomposition of "information granules" (chunks of information) [95], applicable to both algorithmic (deductive) and non-algorithmic (inductive and abductive) computing using quantum logic. This process evolves through machine learning with quantum logic.
Figure 10: Care actions and intentionality construed from wave function collapse.
Figure 11: Model validation using machine learning.
Figure 12: The data sonification of simulated care actions. (Download the file at https://github.com/jphernandezrn/Data-Sonification-Human-Robot-Interaction, accessed on 25 August 2024.)
Figure 13: The spectrogram comparison of the three audio files.
Figure 14: The mathematical model simulation of "stochasticity" and "intentionality" in the humanoid robot. Note: the blue line represents the relationship between "stochasticity" and "intentionality" in a neuromorphic circuit, as modeled by the equation I = 0.5278 + 0.0666S − 0.0565S². The pattern exhibits three distinct phases: initial rise (0.0 to ~0.45), peak plateau (~0.45 to ~0.8), and final decline (~0.8 to 1.0).
Figure 15: The mathematical model simulation of adaptive learning in the humanoid robot. Note: the blue line ("Initial") shows the robot's behavior before learning, characterized by jagged fluctuations due to varying levels of randomness (stochasticity). In contrast, the red line ("After Learning") presents a smoother curve with less variability, indicating enhanced stability after learning. Both lines begin at around 0.5275 intentionality, peak at approximately 0.5475 at "medium stochasticity" (0.6), where there is a balanced mix of predictability and unpredictability, and then decline as stochasticity approaches 1.0. The main difference is that the red line represents a more optimized response, showing that adaptive learning has resulted in more controlled and predictable behavior while maintaining the relationship between "stochasticity" and "intentionality".
Figure 16: Neuromorphic circuit design.
Figure 17: Quantum-neuromorphic circuit design.
Figure 18: Quantum-neuromorphic circuit simulation.
Figure 19: The data sonification of the quantum-neuromorphic circuit simulation. Note: the 'x' symbols in (A) mark the peak amplitudes of the quantum-neuromorphic circuit's waveform, indicating moments of maximum oscillation in the system's behavior. (Download the file at https://github.com/jphernandezrn/Data-Sonification-Human-Robot-Interaction.)
15 pages, 3053 KiB  
Article
Bipedal Stepping Controller Design Considering Model Uncertainty: A Data-Driven Perspective
by Chao Song, Xizhe Zang, Boyang Chen, Shuai Heng, Changle Li, Yanhe Zhu and Jie Zhao
Biomimetics 2024, 9(11), 681; https://doi.org/10.3390/biomimetics9110681 - 7 Nov 2024
Viewed by 464
Abstract
This article introduces a novel perspective on designing a stepping controller for bipedal robots. Typically, designing a state-feedback controller to stabilize a bipedal robot to a periodic orbit of the step-to-step (S2S) dynamics of a reduced-order model (ROM) can achieve stable walking. However, the model discrepancies between the ROM and the full-order dynamic system are often ignored. We apply recent results from behavioral systems theory to construct a robust stepping controller directly from input-state data collected during flat-ground walking with a nominal controller in simulation. The model discrepancies are equivalently represented as bounded noise and over-approximated by bounded-energy ellipsoids. We conducted extensive walking experiments in simulation on a 22-degree-of-freedom small humanoid robot, verifying that it demonstrates superior robustness in handling uncertain loads, various sloped terrains, and push recovery compared to the nominal S2S controller.
(This article belongs to the Special Issue Bio-Inspired Locomotion and Manipulation of Legged Robot: 2nd Edition)
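The nominal starting point, state feedback stabilizing a linear S2S model to a periodic orbit, can be sketched with a discrete LQR. The A and B below are toy placeholders, not BRUCE's identified HLIP matrices.

```python
# Minimal sketch: stabilize deviations from a periodic orbit of a linear
# step-to-step model e_{k+1} = A e_k + B u_k with discrete LQR feedback.
# A, B, and weights are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1],
              [0.6, 1.0]])          # toy pre-impact (position, velocity) map
B = np.array([[1.0],
              [0.5]])               # effect of the step-length input u_k
Q, R = np.eye(2), np.array([[0.1]])

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u_k = -K e_k

e = np.array([0.3, -0.3])           # initial deviation from the orbit
for k in range(8):
    u = -K @ e                      # step-to-step feedback
    e = A @ e + B @ u               # deviation dynamics, one step ahead
    print(f"step {k}: deviation = {e.round(4)}")
```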
Figures:
Figure 1: BRUCE carrying a payload and walking up a slope.
Figure 2: Configuration of the BRUCE robot.
Figure 3: The proposed control framework.
Figure 4: The success rate of walking forward for 20 m with different stepping controllers.
Figure 5: Heat maps of maximum walking speeds under varying slopes: (a) HLIP stepping controller; (b) RDDC stepping controller.
Figure 6: Push recovery of 30 N in the X-direction: (a) HLIP stepping controller; (b) RDDC stepping controller.
Figure 7: Push recovery of 45 N in the Y-direction: (a) HLIP stepping controller; (b) RDDC stepping controller.
Figure 8: The differences between the direct and indirect data-driven control frameworks. In direct data-driven control, the matrix family C is used only as an intermediate parameter, represented by a bounded-energy ellipsoidal parameterization, and is not solved explicitly. This is a fundamental difference from indirect data-driven control.
Figure 9: The results in (a–c) show the velocity responses of the HLIP after walking within the same speed range, subjected to disturbances with added noise ranges of absolute values 0.04, 0.042, and 0.043, respectively; the ellipsoid energy level matches the noise range in each case. Panel (d) illustrates that when the noise range is 0.04, the system diverges under disturbances of the corresponding magnitude.
Figure 10: Sim-to-sim experiments in MuJoCo using a stepping controller built from Gazebo data of varying lengths: (a) 500 steps; (b) 1000 steps; (c) 3000 steps. The red circle and cross marks represent successful passes and failures due to falls, respectively.
23 pages, 20937 KiB  
Article
Lunarminer Framework for Nature-Inspired Swarm Robotics in Lunar Water Ice Extraction
by Joven Tan, Noune Melkoumian, David Harvey and Rini Akmeliawati
Biomimetics 2024, 9(11), 680; https://doi.org/10.3390/biomimetics9110680 - 7 Nov 2024
Viewed by 628
Abstract
The Lunarminer framework explores the use of biomimetic swarm robotics, inspired by the division of labor in leafcutter ants and the synchronized flashing of fireflies, to enhance lunar water ice extraction. Simulations of water ice extraction within Shackleton Crater showed that the framework may improve task allocation, reducing extraction time by up to 40% and energy consumption by 31% in scenarios with high ore block quantities. This system, capable of producing up to 181 L of water per day from excavated regolith at a conversion efficiency of 0.8, could support up to eighteen crew members. It demonstrated robust fault tolerance and sustained operational efficiency, even at a 20% robot failure rate. The framework may help to address key challenges in lunar resource extraction, particularly in the permanently shadowed regions. To refine the proposed strategies, it is recommended that further studies be conducted on their large-scale application in space mining operations at the Extraterrestrial Environmental Simulation (EXTERRES) laboratory at the University of Adelaide.
(This article belongs to the Special Issue Recent Advances in Robotics and Biomimetics)
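A quick check of the throughput figures quoted above; the per-person allocation is our reading of the stated numbers, not a budget given in the paper.

```python
# Sanity check of the abstract's figures: 181 L/day at a regolith-to-water
# conversion efficiency of 0.8, shared among 18 crew members.
daily_water_l = 181
crew = 18
efficiency = 0.8

per_person = daily_water_l / crew
print(f"water per crew member: {per_person:.1f} L/day")          # ~10.1 L/day

# Water that must be present in the excavated regolith before the 0.8 factor:
required_in_regolith = daily_water_l / efficiency
print(f"water content needed in regolith: {required_in_regolith:.0f} L/day")
```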
Figures:
Figure 1: Landing sites near Shackleton Crater on the lunar south pole, marked with blue squares, highlighting the geological formations near Shackleton Crater [41].
Figure 2: Bio-inspired design concepts for the Lunarminer framework. The orange boxes represent specific tasks, i.e., task allocation and material handling (inspired by leafcutter ants) and recruitment and fault tolerance (inspired by fireflies), which contribute to the broader goals stated in the green boxes, such as efficient navigation, swarm automation, and resource handling.
Figure 3: RASSOR 2.0 computer-aided design [14].
Figure 4: Simulated virtual lunar environment. ROS simulation of Shackleton Crater's floor (gray areas) with a central hub for collection (black circles), maintenance (yellow squares), base stations (green squares), processing (blue squares), mining (green areas), and transportation (blue areas). The robot fleet includes 4 orange explorers, 2 green excavators, 4 yellow haulers, and 2 blue transporters.
Figure 5: Lunarminer finite state machine.
Figure 6: (a) Strip search and piecewise tracking function for resource prospecting. The green arrows indicate the strip search path, while the red arrow highlights a prioritized direction or specific target location within the search area. Blue-shaded areas represent zones covered by individual units as they scan for resources. (b) Firefly-bioluminescence-inspired recruitment protocol, where light beacons are placed at ore locations to signal and attract other units to those locations.
Figure 7: (a) Selection of the mining site based on light proximity and intensity, with red arrows indicating sensed skylight directions guiding site selection; (b) mining excavation process showing ore block detection and the hauler positioning system, with light beacon areas representing operational zones; (c) division of labor in transporting ore blocks: yellow arrows indicate transport paths from the mine site to the central hub, and green arrows show paths from the central hub to the processing plant.
Figure 8: (a) A red-light signal emitted by a malfunctioning robot, inspired by the flashing behavior of fireflies, with blue-shaded areas representing the communication range of each robot; (b) activation of the fault-tolerance protocol to replace the malfunctioning robot, indicated by red arrows guiding the replacement robot toward its target within the blue communication zones. The base station and maintenance site are shown in green and yellow, respectively, facilitating coordination of the replacement process.
Figure 9: Highlights of the stages of the Lunarminer mining process, from exploration to recovery.
Figure 10: (a) Resource extraction time and (b) energy distribution across different scenarios.
Figure 11: Fault tolerance and system robustness across three scenarios: normal, with failure, and with recovery.
Figure 12: Lunarminer framework classification.
21 pages, 36914 KiB  
Article
Development of a Novel Tailless X-Type Flapping-Wing Micro Air Vehicle with Independent Electric Drive
by Yixin Zhang, Song Zeng, Shenghua Zhu, Shaoping Wang, Xingjian Wang, Yinan Miao, Le Jia, Xinyu Yang and Mengqi Yang
Biomimetics 2024, 9(11), 671; https://doi.org/10.3390/biomimetics9110671 - 3 Nov 2024
Viewed by 630
Abstract
A novel tailless X-type flapping-wing micro air vehicle with two pairs of independently driven wings is designed and fabricated in this paper. Because of the complexity and unsteadiness of the flapping-wing mechanism, the geometric and kinematic parameters of the flapping wings significantly influence the aerodynamic characteristics of the bio-inspired flying robot. The wings of the vehicle are vector-controlled independently on both sides, enhancing the maneuverability and robustness of the system. A unique flight control strategy gives the aircraft multiple flight modes, such as fast forward flight, sharp turning, and hovering. The aerodynamics of the prototype are analyzed via the lattice Boltzmann method of computational fluid dynamics, with the chordwise flexible deformation of the wing implemented through a segmented rigid model. The clap-and-peel mechanism that improves aerodynamic lift is revealed, showing two air jets in one cycle. Moreover, dynamics experiments with a 6-axis load cell investigate the kinematic parameters that affect the generation of thrust and maneuvering moments. Optimized parameters of the flapping-wing motion and structure are obtained to improve flight dynamics. Finally, the prototype achieves controllable take-off and flight from the ground.
(This article belongs to the Section Locomotion and Bioinspired Robotics)
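The double crank-rocker drive (Figure 4 below) admits a standard vector-loop position analysis. The sketch below solves for the rocker (wing) angle as a function of crank angle; the link lengths are assumed for illustration, not the prototype's.

```python
# Minimal sketch of crank-rocker position analysis: given the crank angle,
# solve the four-bar vector loop for the rocker (wing) angle.
# Link lengths are illustrative assumptions satisfying the Grashof condition.
import numpy as np

a, b, c, d = 1.0, 4.0, 3.0, 4.5   # crank, coupler, rocker, ground

def rocker_angle(theta2, open_branch=True):
    # crank pin position relative to the rocker pivot at (d, 0)
    ax = a * np.cos(theta2) - d
    ay = a * np.sin(theta2)
    r = np.hypot(ax, ay)
    phi = np.arctan2(ay, ax)
    # triangle with sides (r, rocker c, coupler b): angle at the rocker pivot
    cos_psi = (c**2 + r**2 - b**2) / (2 * c * r)
    psi = np.arccos(np.clip(cos_psi, -1.0, 1.0))
    return phi + psi if open_branch else phi - psi

for deg in range(0, 360, 60):
    th4 = rocker_angle(np.radians(deg))
    print(f"crank {deg:3d} deg -> rocker {np.degrees(th4):7.2f} deg")
```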
Figures:
Figure 1: Designed configuration and related details of the prototype.
Figure 2: Onboard electronic system structure.
Figure 3: Bionic wings with different membrane materials and wing vein distributions.
Figure 4: Double crank-rocker mechanism.
Figure 5: Clap-and-peel mechanism during the flapping process. (a) Clap of the real butterfly; (b) near clap; (c) leading edges touch together; (d) peel of the real butterfly; (e) complete clap; (f) initial peel; (g) end of peel of the real butterfly; (h) trailing edges separate; (i) complete peel.
Figure 6: Tailless vector control schematic of the prototype. (a) Altitude control; (b) yaw control; (c) pitch control (head up); (d) pitch control (head down); (e) roll control (clockwise); (f) roll control (counterclockwise).
Figure 7: PD attitude feedback control system for the X-type tailless FMAV.
Figure 8: (a) Initial locations of the parts of the prototype model in Xflow; (b) model of the X-type FMAV in Adams; (c) visualization of the vortex structure of the flexible model in the Xflow flow field.
Figure 9: Experimental setup for force, torque, and power measurements.
Figure 10: Vicon system setup for real-time measurement of attitude angles (pitch θ, roll ϕ, and yaw ψ), angular rates (p, q, r), and spatial position (x, y, z) at a frame rate of 150 Hz.
Figure 11: Simulation results of the wing flow field of the X-type FMAV: (a) at t = 0.015 s, the vorticity isosurface around the aircraft; (b) at t = 0.025 s (when the wings clap together), the velocity vector field on a vertical z-direction cutting plane at z = 0.1 m.
Figure 12: Instantaneous aerodynamic forces along the X-, Y-, and Z-axes for 4 cycles of the hovering FMAV.
Figure 13: Frame-by-frame screenshots of the vector velocity field at z = 0.1 m on the vertical z-direction cutting plane during the first flapping stroke of hovering flight; the flapping period is T = 0.05 s.
Figure 14: Visualization of the vorticity isosurface from an oblique downward 45° viewing angle at (a) t = T/4, (b) T/2, (c) 3T/4, and (d) T.
Figure 15: Simulation results of the flow field at the end of the first cycle of hovering flight of the X-type FMAV: (a) front-view vorticity isosurface; (b) velocity field on the cutting plane at x = 0 m, perpendicular to the X-axis; (c) velocity field on the cutting plane at z = 0.1 m, perpendicular to the Z-axis.
Figure 16: Generation of force and torques about the three axes for a range of control inputs. (a) Yaw control; (b) pitch control; (c) roll control.
Figure 17: Free flight test of the prototype. (a) Take-off test with rope constraints; (b) 3D trajectory plot of the X-type FMAV.
13 pages, 3354 KiB  
Article
Optimal DMD Koopman Data-Driven Control of a Worm Robot
by Mehran Rahmani and Sangram Redkar
Biomimetics 2024, 9(11), 666; https://doi.org/10.3390/biomimetics9110666 - 1 Nov 2024
Viewed by 639
Abstract
Bio-inspired robots are devices that mimic the motions and structures of animals in nature. Worm robots are inspired by the movements of worms and have applications in areas such as medicine and search and rescue. However, controlling a worm robot is challenging due to its highly nonlinear dynamic model and the external disturbances acting on it. This research uses an optimal data-driven controller to control the worm robot. First, data are obtained from the nonlinear model of the worm robot. Then, Koopman theory is used to generate a linear dynamic model of the worm robot, with the dynamic mode decomposition (DMD) method used to generate the Koopman operator. Finally, a linear quadratic regulator (LQR) is applied to control the worm robot. The simulation results verify the performance of the proposed control method.
(This article belongs to the Special Issue Data-Driven Methods Applied to Robot Modeling and Control)
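The pipeline in the abstract, fitting a linear model from input-state data and then designing an LQR on it, can be sketched end to end. The "worm" dynamics below are a toy nonlinear stand-in, not the paper's model, and the states themselves serve as the Koopman observables.

```python
# Minimal sketch of DMD-with-control + LQR: collect input-state snapshots
# from a nonlinear system, fit x+ = A x + B u by least squares, then design
# an LQR on the identified model. The plant is a toy stand-in.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
dt = 0.05

def true_step(x, u):                     # "unknown" nonlinear plant (toy)
    x1, x2 = x
    return np.array([x1 + dt * x2,
                     x2 + dt * (-np.sin(x1) - 0.5 * x2 + u)])

# 1) collect snapshot data under random excitation
N = 400
X, U, Xp = [], [], []
x = np.array([0.5, 0.0])
for _ in range(N):
    u = rng.uniform(-1, 1)
    xp = true_step(x, u)
    X.append(x); U.append([u]); Xp.append(xp)
    x = xp
X, U, Xp = np.array(X).T, np.array(U).T, np.array(Xp).T   # shape (n, N)

# 2) DMD with control: [A B] = Xp @ pinv([X; U])
AB = Xp @ np.linalg.pinv(np.vstack([X, U]))
A, B = AB[:, :2], AB[:, 2:]

# 3) LQR on the identified linear model
Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# 4) closed-loop test on the true nonlinear plant
x = np.array([1.0, 0.0])
for k in range(60):
    x = true_step(x, float(-K @ x))
print("final state:", x.round(4))        # should settle near the origin
```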
Figures:
Figure 1: Worm robot motion in nature [28].
Figure 2: Worm robot mechanism.
Figure 3: Diagram of the proposed control method.
Figure 4: Position tracking of the worm robot joints under the proposed controller.
Figure 5: Position tracking of the worm robot joints under the PID controller.
Figure 6: Velocity of the worm robot joints under the proposed controller.
Figure 7: Simulation of the worm robot locomotion.
19 pages, 2646 KiB  
Article
Comparison of Empirical and Reinforcement Learning (RL)-Based Control Based on Proximal Policy Optimization (PPO) for Walking Assistance: Does AI Always Win?
by Nadine Drewing, Arjang Ahmadi, Xiaofeng Xiong and Maziar Ahmad Sharbafi
Biomimetics 2024, 9(11), 665; https://doi.org/10.3390/biomimetics9110665 - 1 Nov 2024
Viewed by 698
Abstract
The use of wearable assistive devices is growing in both industrial and medical fields. Combining human expertise and artificial intelligence (AI), e.g., in human-in-the-loop optimization, is gaining popularity for adapting assistance to individuals. Amidst prevailing assertions that AI could surpass human capabilities in customizing every facet of support for human needs, our study serves as an initial step towards evaluating such claims in the context of human walking assistance. We investigated the efficacy of the Biarticular Thigh Exosuit, a device designed to aid human locomotion by mimicking the action of the hamstrings and rectus femoris muscles using Serial Elastic Actuators. Two control strategies were tested: an empirical controller based on human gait knowledge and empirical data, and a controller optimized using Reinforcement Learning (RL) on a neuromuscular model. The controllers were assessed by comparing muscle activation in two assisted and two unassisted walking modes. Results showed that both controllers reduced hamstring muscle activation and improved the preferred walking speed, with the empirical controller also decreasing gastrocnemius muscle activity. However, the RL-based controller increased muscle activity in the vastus and rectus femoris, indicating that RL-based enhancements may not always improve assistance without solid empirical support.
(This article belongs to the Special Issue Biologically Inspired Design and Control of Robots: Second Edition)
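The outcome measures used in the figures below, the mean absolute value (MAV) and peak of the sEMG envelope reported as a percentage change against the No Exo baseline, reduce to a few lines of code. The signals and per-condition gains here are synthetic assumptions, not the study's recordings.

```python
# Minimal sketch of the sEMG evaluation metric: MAV and peak per condition,
# as percentage change versus the No Exo baseline. Signals are synthetic.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)                      # one normalized gait cycle

def fake_semg(gain):
    burst = np.exp(-((t - 0.3) / 0.08) ** 2)     # activation burst at 30% cycle
    return gain * burst + 0.02 * rng.standard_normal(t.size)

conditions = {"No Exo": fake_semg(1.0),
              "Empirical": fake_semg(0.8),       # assumed reduction
              "RL-based": fake_semg(1.1)}        # assumed increase

base = conditions["No Exo"]
base_mav, base_peak = np.mean(np.abs(base)), np.max(np.abs(base))
for name, sig in conditions.items():
    mav, peak = np.mean(np.abs(sig)), np.max(np.abs(sig))
    print(f"{name:10s}  dMAV = {100*(mav-base_mav)/base_mav:+6.1f}%"
          f"  dPeak = {100*(peak-base_peak)/base_peak:+6.1f}%")
```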
Figures:
Figure 1: Exosuit and models of human and exosuit: the exosuit worn during an experiment (left); schematic of the exosuit structure and functionality (middle); the neuromuscular model in Scone for the AI training (right).
Figure 2: Development process of the desired force curve.
Figure 3: Control structure for both the empirical and the RL-based control.
Figure 4: Average excitation of the hamstring (HAM), vastus (VAS), gastrocnemius (GAS), and soleus (SOL) muscles during simulated walking under different conditions: No Exo, Empirical, and RL-based control.
Figure 5: Force measured with the BATEX force sensors using the empirical and RL-based control while walking at 0.8 m/s; the pre-tension force at the rest position is subtracted.
Figure 6: Average sEMG signal of four subjects measured during walking at 0.8 m/s under different conditions: No Exo, No Control, empirical, and RL-based control. The vertical dashed lines show the take-off moment for each condition.
Figure 7: Change in the sEMG signal's MAV and maximum (peak) values relative to sEMG signals measured without the exosuit while walking at 0.8 m/s. The asterisk [*] denotes a p-value < 0.05.
Figure 8: Changes in the average preferred walking-to-running transition speed (PTS) and preferred walking speed (PWS) compared to the No Exo case.
Figure 9: Average sEMG signal of four subjects measured during walking at the preferred walking speed (PWS) under different conditions: No Exo, No Control, empirical, and AI-based control. The vertical dashed lines show the take-off moment for each condition.
Figure 10: Change (in %) of the MAV and peak value of the sEMG signal compared to sEMG signals measured without an exosuit while walking at a comfortable speed.
15 pages, 3701 KiB  
Article
Compliant Grasp Control Method for the Underactuated Prosthetic Hand Based on the Estimation of Grasping Force and Muscle Stiffness with sEMG
by Xiaolei Xu, Hua Deng, Yi Zhang and Nianen Yi
Biomimetics 2024, 9(11), 658; https://doi.org/10.3390/biomimetics9110658 - 27 Oct 2024
Viewed by 668
Abstract
Human muscles can generate force and stiffness during contraction. When in contact with objects, human hands can achieve compliant grasping by adjusting the grasping force and the muscle stiffness based on the object’s characteristics. To realize humanoid-compliant grasping, most prosthetic hands obtain the stiffness parameter of the compliant controller according to the environmental stiffness, which may be inconsistent with the amputee’s intention. To address this issue, this paper proposes a compliant grasp control method for an underactuated prosthetic hand that can directly obtain the control signals for compliant grasping from surface electromyography (sEMG) signals. First, an estimation method of the grasping force is established based on the Huxley muscle model. Then, muscle stiffness is estimated based on the muscle contraction principle. Subsequently, a relationship between the muscle stiffness of the human hand and the stiffness parameters of the prosthetic hand controller is established based on fuzzy logic to realize compliant grasp control for the underactuated prosthetic hand. Experimental results indicate that the prosthetic hand can adjust the desired force and stiffness parameters of the impedance controller based on sEMG, achieving a quick and stable grasp as well as a slow and gentle grasp on different objects. Full article
(This article belongs to the Special Issue Human-Inspired Grasp Control in Robotics)
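The core idea in the abstract — deriving both the desired force and the impedance controller's stiffness from sEMG — can be sketched as follows. This is a minimal illustration assuming a standard impedance law with a feed-forward force term; the linear interpolation stands in for the paper's fuzzy-logic mapping, and all numeric ranges are invented for the example.

# Sketch of compliant grasp control driven by sEMG estimates. The linear
# mapping below replaces the paper's fuzzy-logic rules, and every numeric
# range is an illustrative assumption.
import numpy as np

def controller_stiffness(k_muscle, muscle_range=(50.0, 400.0),
                         ctrl_range=(0.2, 2.0)):
    """Map estimated muscle stiffness into the controller stiffness range
    (stand-in for the fuzzy-logic relationship described in the paper)."""
    lo, hi = muscle_range
    t = np.clip((k_muscle - lo) / (hi - lo), 0.0, 1.0)
    return ctrl_range[0] + t * (ctrl_range[1] - ctrl_range[0])

def impedance_command(f_desired, k_ctrl, b_ctrl, x_ref, x, dx):
    """Impedance-style grasp command: sEMG-derived feed-forward force plus
    a spring-damper term around the reference finger position."""
    return f_desired + k_ctrl * (x_ref - x) - b_ctrl * dx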
Show Figures

Figure 1: Schematic diagram of the grasping control method for the myoelectric prosthetic hand.
Figure 2: Collecting sEMG signals while grasping different objects with a human hand.
Figure 3: sEMG signal estimation results when grasping different objects: (a) grasping force, (b) muscle stiffness.
Figure 4: Schematic diagram of the mechanical structure of a finger on the prosthetic hand.
Figure 5: Schematic of the compliant control algorithm based on grasping force and muscle stiffness estimation.
Figure 6: The fuzzy logic relationship between estimated muscle stiffness and controller stiffness.
Figure 7: Schematic diagram of the experimental process of the grasping force and muscle stiffness estimation-based compliant grasp control of the underactuated prosthetic hand.
Figure 8: Experiments of the compliant grasp control with a single-layer paper cup (Object 1), double-layer paper cup (Object 2), milk carton (Object 3), and plastic cup (Object 4).
Figure 9: Experimental results of sEMG signal estimation when grasping different objects: (a) estimated grasping force, (b) estimated muscle stiffness.
Figure 10: Experimental results of the compliant grasp control experiment: (a) grasping force applied by the prosthetic hand, (b) rotation angle of rod 1, (c) motor current, (d) motor voltage.
Full article
22 pages, 20719 KiB  
Article
A Computationally Efficient Neuronal Model for Collision Detection with Contrast Polarity-Specific Feed-Forward Inhibition
by Guangxuan Gao, Renyuan Liu, Mengying Wang and Qinbing Fu
Biomimetics 2024, 9(11), 650; https://doi.org/10.3390/biomimetics9110650 - 22 Oct 2024
Viewed by 676
Abstract
Animals utilize their well-evolved dynamic vision systems to perceive and evade collision threats. Driven by biological research, bio-inspired models based on lobula giant movement detectors (LGMDs) address certain gaps in constructing artificial collision-detecting vision systems with robust selectivity, offering reliable, low-cost, and miniaturized collision sensors across various scenes. Recent progress in neuroscience has revealed the energetic advantages of dendritic arrangements presynaptic to the LGMDs, which receive contrast polarity-specific signals on separate dendritic fields. Specifically, feed-forward inhibitory inputs arise from parallel ON/OFF pathways interacting with excitation. However, no previous research has investigated the evolution of a computational LGMD model with feed-forward inhibition (FFI) separated by opposite polarity. This study fills this gap by presenting an optimized neuronal model in which FFI is divided into ON/OFF channels, each with distinct synaptic connections. To align with the energy efficiency of biological systems, we introduce an activation function associated with the neural computation of FFI and the interactions between local excitation and lateral inhibition within the ON/OFF channels, ignoring non-active signal processing. This approach significantly improves the time efficiency of the LGMD model, focusing only on substantial luminance changes in image streams. The proposed neuronal model not only accelerates visual processing in relatively stationary scenes but also maintains robust selectivity to ON/OFF-contrast looming stimuli. Additionally, it can suppress translational motion to a moderate extent. Comparative testing against state-of-the-art models based on ON/OFF channels was conducted systematically using a range of visual stimuli, including indoor structured and complex outdoor scenes. The results demonstrated significant time savings in silico while retaining the original collision selectivity. Furthermore, the optimized model was implemented in the embedded vision system of a micro-mobile robot, achieving the highest success ratio of collision avoidance at 97.51% while nearly halving the processing time compared with previous models. This highlights a robust and parsimonious collision-sensing mode that effectively addresses real-world challenges. Full article
(This article belongs to the Special Issue Bio-Inspired and Biomimetic Intelligence in Robotics: 2nd Edition)
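The energy saving claimed in the abstract comes from splitting luminance change into polarity-specific channels and gating out sub-threshold activity before any excitation/inhibition is computed. A minimal sketch of that front end, with an assumed threshold value and 8-bit grayscale frames:

# Polarity-specific front end: half-wave rectify the frame difference into
# ON (brightening) and OFF (darkening) channels, zeroing sub-threshold
# pixels so downstream excitation/lateral-inhibition interactions are only
# evaluated where the scene actually changed. theta is an assumed value.
import numpy as np

def on_off_split(prev_frame, frame, theta=5.0):
    diff = frame.astype(np.float32) - prev_frame.astype(np.float32)
    on = np.where(diff > theta, diff, 0.0)      # ON channel
    off = np.where(diff < -theta, -diff, 0.0)   # OFF channel
    n_active = int(np.count_nonzero(on) + np.count_nonzero(off))
    return on, off, n_active  # n_active gates how much work follows

In mostly stationary scenes n_active stays small, which is where the reported processing-time savings would come from.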
Show Figures

Figure 1: Neuromorphology of LGMD1 and LGMD2: (a) the presynaptic neuropile layers of the LGMD neuron and its postsynaptic one-to-one target, the DCMD neuron, image courtesy of [26]; (b) a 3D reconstruction of the dendritic trees of LGMD1 and LGMD2, indicated by white and green arrows, respectively, adapted from [27]; (c) the schematics of the locust's optic lobe, where the inset illustrates the three dendritic fields of the LGMD, image courtesy of [22].
Figure 2: Illustrations of the optimized-LGMD neural network model: (a) the fundamental network structure of the model; (b) the detailed construction within each partial neural network (PNN); (c) the detailed components.
Figure 3: Illustration of the Colias micro-mobile robot and the arena used in online experiments.
Figure 4: Illustrations of approaching stimulation and results: the horizontal axis represents the frames of the input video, while the vertical axis shows the membrane potential output of the different models. When the membrane potential exceeds 0.8, the downstream neuron fires a spike, indicated by a red dashed line. (a) Visual stimuli, (b) LGMD1's membrane potential and spikes, (c) oLGMD1's membrane potential and spikes, (d) LGMD2's membrane potential and spikes, (e) oLGMD2's membrane potential and spikes. All four comparative models respond strongly to the proximity of the object.
Figure 5: Illustrations of receding stimulation and results: the LGMD2 and oLGMD2 models remain silent to dark receding, i.e., ON-contrast stimulation.
Figure 6: Illustrations of translating stimulation and results: oLGMD1 and oLGMD2 are inhibited against translation.
Figure 7: Ball approaching test results of the oLGMD model: both oLGMD1 and oLGMD2 respond strongly to the stimulation.
Figure 8: Ball receding test results of the oLGMD model: oLGMD1 responds briefly and oLGMD2 is unresponsive.
Figure 9: Ball translating test results of the oLGMD model: both oLGMD1 and oLGMD2 well suppress the excitation induced by translation.
Figure 10: Results from the night crash tests demonstrate that both the oLGMD1 and oLGMD2 models can accurately predict imminent collisions under low-light conditions.
Figure 11: Daylight car crash tests illustrate that both the oLGMD1 and oLGMD2 models can precisely predict imminent collisions in bright, daytime conditions.
Figure 12: Data grouped into four sets corresponding to LGMD1, LGMD2, oLGMD1, and oLGMD2, each analyzed along two feature dimensions: (1) a box-and-whisker plot illustrates the distribution of the model's required run times across all collision events of varying lengths, highlighting the median, quartiles, and potential outliers; (2) an error-bar graph displays the overall characteristics of the run times, with the error bars representing the range of variability across the experiments, i.e., the uncertainty in the model's integrated processing times across all collision events. Both features cover a variety of experiments, providing a comprehensive view of the computational efficiency of each comparative model. The proposed oLGMD model halves the processing time and shows a smaller variance.
Figure 13: Responses of three collision-selective models to dark approaching stimulation: both oLGMD1 and oLGMD2 generate collision warnings in response to the dark approaching stimulus, whereas reverse-oLGMD2 remains unresponsive.
Figure 14: Responses of three collision-selective models to light approaching stimulation: both oLGMD1 and reverse-oLGMD2 issue collision warnings in response to the light approaching stimulus, whereas oLGMD2 remains unresponsive. The oLGMD2 and reverse-oLGMD2 models have opposite looming selectivities.
Figure 15: Robot arena tests showing three processes with trajectories over time: (a) oLGMD1 (white) vs. LGMD1 (red), (b) oLGMD2 (white) vs. LGMD2 (red), (c) oLGMD1 (white) vs. oLGMD2 (red). All Colias robots operated at a constant linear speed of approximately 0.04 m/s, with each process lasting 10 min.
Figure 16: Density maps depicting collision avoidance and crash events in the arena tests of the oLGMD model, with each model tested over a 30-min period: (a) oLGMD1 avoidance events, (b) oLGMD1 collision events, (c) oLGMD2 avoidance events, (d) oLGMD2 collision events.
Figure 17: Comparison of the computational efficiency of LGMD1 versus oLGMD1 and LGMD2 versus oLGMD2 in response to 20 collision stimuli. In each scenario, the micro-robot processes 100 frames of raw images, with processing times averaged over these frames in milliseconds, as measured by a real-world clock system. The shaded area highlights the significant improvement in computational efficiency achieved by the oLGMD model compared with the LGMD model across these collision scenarios.
Figure 18: Comparison of the average time costs of oLGMD1 versus LGMD1 and oLGMD2 versus LGMD2 in response to 20 collision stimuli. Each visual movement consists of 100 frames, and the curves represent the average time measured across 20 repeated tests. The shaded area highlights the substantial improvement in computational efficiency of the oLGMD models, which significantly reduce processing time outside the collision time window through the ON/OFF FFI pathways.
Figure 19: The robot's performance with varying activation thresholds in the ON/OFF FFI pathways, examined via the computing efficiency ratio (ER, Equation (13)) and the success ratio (SR, Equation (14)) of collision detection. The horizontal axis represents different activation thresholds. The left vertical axis shows the average operational ER of the proposed model when a collision scenario stimulates the Colias vision system; the right vertical axis indicates the collision detection SR under the same conditions. The collision process was repeated 50 times to calculate the average ER and SR. As the activation threshold increases, the oLGMD model processes progressively less visual information, leading to higher computational efficiency but a lower collision recognition rate. The green-shaded area represents the range of activation threshold values that effectively balances these two factors.
Full article
17 pages, 6583 KiB  
Article
A Pneumatic Soft Exoskeleton System Based on Segmented Composite Proprioceptive Bending Actuators for Hand Rehabilitation
by Kai Li, Daohui Zhang, Yaqi Chu, Xingang Zhao, Shuheng Ren and Xudong Hou
Biomimetics 2024, 9(10), 638; https://doi.org/10.3390/biomimetics9100638 - 18 Oct 2024
Viewed by 762
Abstract
Soft pneumatic actuators/robotics have received significant interest in the medical and health fields, due to their intrinsic elasticity and simple control strategies for enabling desired interactions. However, current soft hand pneumatic exoskeletons often exhibit uniform deformation, mismatch the profile of the interacting objects, and seldom quantify the assistive effects during activities of daily life (ADL), such as extension angle and predicted joint stiffness. The lack of quantification poses challenges to the effective and sustainable advancement of rehabilitation technology. This paper introduces the design, modeling, and testing of pneumatic bioinspired segmented composite proprioceptive bending actuators (SCPBAs) for hand rehabilitation in ADL tasks. Inspired by human finger anatomy, the actuator’s soft-joint–rigid-bone segmented structure provides a superior fit compared to continuous structures in traditional fiber-reinforced actuators (FRAs). A quasi-static model is established to predict the bending angles based on geometric parameters. Quantitative evaluations of predicted joint stiffness and extension angle utilizing proprioceptive bending are performed. Additionally, a soft under-actuated hand exoskeleton equipped with SCPBAs demonstrates their potential in ADL rehabilitation scenarios. Full article
(This article belongs to the Special Issue Optimal Design Approaches of Bioinspired Robots)
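The quasi-static model in the abstract predicts bending angles from geometric parameters. As a rough illustration of the kind of relation involved, here is a linear torque-balance sketch in which each soft joint's pneumatic moment is resisted by an effective rotational stiffness; this is a simplified stand-in for the paper's model, and every parameter name and unit is an assumption.

# Illustrative quasi-static torque balance for the segmented actuator:
# each soft joint bends by theta_i = P * A_i * r_i / k_i (pneumatic moment
# over effective rotational stiffness), and the total bending angle is the
# sum over the soft joints. A linear stand-in for the paper's geometric model.
def joint_angle(pressure, area, moment_arm, stiffness):
    """Bending angle [rad] of one soft joint under pressure [Pa]."""
    return pressure * area * moment_arm / stiffness

def actuator_angle(pressure, segments):
    """segments: iterable of (area [m^2], moment_arm [m], stiffness
    [N·m/rad]) per soft joint, e.g., the MCP and PIP segments."""
    return sum(joint_angle(pressure, a, r, k) for a, r, k in segments)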
Show Figures

Figure 1: Schematics of the structure and concept of the proposed SCPBAs: (a) illustration of the structure of a human finger with one flexion DOF; (b) illustration of the flexible-joint–rigid-bone anatomy of a human finger.
Figure 2: Fabrication and assembly of the SCPBAs.
Figure 3: Illustration of the proposed SCPBA bending deformation in free space.
Figure 4: Diagram of bending deformation in constrained space considering finger joint stiffness: (a) joints with intrinsic flexion torque; (b) joints with involuntary flexion torque.
Figure 5: The experimental setup of the SCPBAs.
Figure 6: The relationship between voltage and corresponding angle of the FlexSensor attached to the SCPBAs.
Figure 7: The relationship between input pressure and output angle for various thicknesses of the stiffness-compensating layer in free space for the MCP and PIP segments. Error bars show the experimental measurements across all three trials, and the dashed lines represent the analytical model results.
Figure 8: Bending resistance (BR)/efficiency comparison of the different thicknesses of the stiffness-compensating layer.
Figure 9: Stiffness estimation of the index finger MCP joint: (a) dummy finger coupled with a torsional spring, (b) measured MCP joint angles and estimated stiffness.
Figure 10: The three predefined ADL tasks.
Figure 11: Experimental results of an able-bodied (AB) subject: (a) bending angle results during the task-oriented training trial; (b) applied pressure values for achieving the targeted angles.
Full article
19 pages, 5339 KiB  
Article
Stair-Climbing Wheeled Robot Based on Rotating Locomotion of Curved-Spoke Legs
by Dongwoo Seo and Jaeyoung Kang
Biomimetics 2024, 9(10), 633; https://doi.org/10.3390/biomimetics9100633 - 17 Oct 2024
Viewed by 795
Abstract
This study proposes a new wheel-leg mechanism concept and formulations for the kinematics and dynamics of a stair-climbing robot utilizing the rotating leg locomotion of curved spokes and rolling tires. The system consists of four motor-driven tires and four curved-spoke legs. The curved-spoke leg is semicircle-like and is used to climb stairs. Once the spoke leg rolls on the surface, it lifts and pulls the mating wheel toward the surface, owing to the kinematic constraint between the spoke and the wheel. Single-wheel climbing is a necessary condition for the stair climbing of the whole robot equipped with front and rear axles. This study proposes the design requirements of a spoke leg for successful single-wheel climbing in terms of kinematic inequality equations derived from the single-wheel climbing scenario. For a design configuration that enables single-wheel climbing, the minimum friction coefficient required by the static analysis of the stair-climbing wheeled robot is demonstrated. Thereafter, the stair-climbing ability is validated through dynamic equations that allow frictional slip of the tires as well as the curved-spoke legs. Lastly, the results revealed that the rotating locomotion of well-designed curved-spoke legs effectively enables stair climbing of the whole robot. Full article
(This article belongs to the Special Issue Design and Control of a Bio-Inspired Robot: 3rd Edition)
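The static analysis mentioned in the abstract yields a minimum friction coefficient. A simplified version of that bound, assuming quasi-static climbing on the equivalent slope of Figure 8 and ignoring the spoke-leg contact forces that the paper's full analysis includes:

% Simplified static bound (not the paper's full analysis): on the
% equivalent slope formed by step height H and width W, the driven tires
% must supply at least the gravity component along the slope, so
\varphi = \tan^{-1}\!\left(\frac{H}{W}\right), \qquad
\mu_{\min} \geq \tan\varphi = \frac{H}{W}.
% For the \varphi = 31^\circ used in the figures, \mu_{\min} \approx 0.60.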
Show Figures

Figure 1: Robot with motor-driven tires and rotating curved-spoke legs: (a) configuration of the system; (b) free-body diagram.
Figure 2: Contact penalty method for tire A.
Figure 3: Contact penalty method for the contact forces of the curved-spoke leg; F₀, F₁, and F₂ denote the coordinate systems xyz (global), x′y′z′ (robot body-fixed), and x″y″z″ (spoke-fixed), respectively.
Figure 4: Surface profile function for an arbitrary stair geometry.
Figure 5: Scenario of single-wheel climbing locomotion: (a) phase 1: the tire rolls on the lower surface up to the wall of the stair and the spoke rotates and touches down on the higher surface; (b) phase 2: the wheel is elevated from the lower to the higher surface owing to the pulling of the spoke leg during spoke rolling; (c) phase 3: both the wheel and the spoke leg roll on the higher surface.
Figure 6: Configuration of the curved-spoke leg relative to the wheel: (a) the joint position J = (J_x′, J_y′) and rotation angle β′ with respect to the body frame x′-y′-z′; (b) the determination of joint position J for successful single-wheel climbing when the tire is in contact with the wall at C_W.
Figure 7: Configuration of the kinematic inequality condition for wheel climbing to vertex Q along the trajectory of translational motion; the spoke rotates through θ_rot = β_U − β while maintaining the constrained length l and pulling the wheel.
Figure 8: Static force diagram during stair climbing when the wheelbase L is approximated by the hypotenuse of several steps, L ≈ 2√(H² + W²).
Figure 9: The regime of joint positions of the spoke leg relative to the wheel satisfying the kinematic inequality conditions at (a) φ = 0° and (b) φ = tan⁻¹(H/W) = 31°, and (c) the regime satisfying the inequality conditions for both pitch angles.
Figure 10: Kinematic sequential motion of wheel climbing for (J_x′, J_y′) = (0.1, 0.1) at (a) φ = 0° (horizontal body) and (b) φ = tan⁻¹(H/W) = 31° (slanted body).
Figure 11: Locomotion of the robot during stair climbing for (J_x′, J_y′) = (0.1, 0.1) and L = 2√(H² + W²) = 0.58: (a) phase 1, (b) initial phase 2, (c) mid-phase 2, and (d) final phase 2.
Figure 12: Stair-climbing response of the dynamic model for (J_x′, J_y′) = (0.1, 0.1) and L = 2√(H² + W²) = 0.58: (a) the CG position and (b) velocity; the wheelbase L is assumed to be adjusted to the length of the hypotenuse of two steps.
Figure 13: Input values of the front and rear curved-spoke legs for stair climbing at (J_x′, J_y′) = (0.1, 0.1) and L = 2√(H² + W²) = 0.58: (a) spoke angle, (b) spoke speed, and (c) spoke torque.
Full article