Search Results (122)

Search Parameters:
Keywords = mobile service robot

14 pages, 6903 KiB  
Communication
Development of Dual-Arm Human Companion Robots That Can Dance
by Joonyoung Kim, Taewoong Kang, Dongwoon Song, Gijae Ahn and Seung-Joon Yi
Sensors 2024, 24(20), 6704; https://doi.org/10.3390/s24206704 - 18 Oct 2024
Viewed by 425
Abstract
As gestures play an important role in human communication, a number of service robots have been equipped with a pair of human-like arms for gesture-based human–robot interaction. However, the arms of most human companion robots are limited to slow, simple gestures by the low maximum velocity of their arm actuators. In this work, we present the JF-2 robot, a mobile home service robot equipped with a pair of torque-controlled anthropomorphic arms. Thanks to the arms' low-inertia design, responsive Quasi-Direct Drive (QDD) actuators, and active compliant control of the joints, the robot can replicate fast human dance motions while remaining safe in its environment. In addition to the JF-2 robot, we also present the JF-mini robot, a scaled-down, low-cost version of the JF-2 mainly targeted at commercial use in kindergartens and childcare facilities. The suggested system is validated in three experiments: a safety test, teaching children to dance along to music, and bringing a requested item to a human subject.
(This article belongs to the Special Issue Intelligent Social Robotic Systems)
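The "active compliant control of the joints" mentioned in the abstract can be sketched as a saturated joint-space impedance law: a proportional pull toward the target posture, damped by joint velocity, with the output torque clipped so contact forces stay bounded. This is a generic illustration, not the authors' published controller; all gains, limits, and joint values below are made-up.

```python
def compliant_joint_torques(q, dq, q_des, kp, kd, tau_max):
    """Joint-space impedance law tau = kp*(q_des - q) - kd*dq, saturated
    at +/-tau_max so the arm yields safely on unexpected contact."""
    tau = []
    for qi, dqi, qdi in zip(q, dq, q_des):
        t = kp * (qdi - qi) - kd * dqi
        tau.append(max(-tau_max, min(tau_max, t)))
    return tau

# Example: a 4-DOF arm tracking one dance keyframe (illustrative numbers)
tau = compliant_joint_torques(
    q=[0.0, 0.2, -0.1, 0.0],       # current joint angles (rad)
    dq=[0.0, 0.0, 0.0, 0.0],       # current joint velocities (rad/s)
    q_des=[0.5, 0.4, -0.3, 0.2],   # keyframe target (rad)
    kp=8.0, kd=0.5, tau_max=3.0)
# the first joint's torque saturates at the 3.0 N*m limit
```

The saturation is what keeps fast motion safe: however far the target posture jumps during a fast dance move, the commanded torque never exceeds the limit.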
Show Figures

Figure 1: The JF-2 (left) and JF-mini (right) human companion robots without casing.
Figure 2: Hardware configurations and dimensions of the JF-2 and JF-mini robots.
Figure 3: Four-DOF arm mechanisms of the JF-2 and JF-mini robots.
Figure 4: Use cases of the chest display. (a) Robot control. (b) Synchronously showing an animation while dancing.
Figure 5: Various facial expressions with the circular LCD head.
Figure 6: Software architecture of the JF-2 and JF-mini robots.
Figure 7: Three different human motion capturing methods. (a) Motion capture-based. (b) VR tracker-based. (c) Keypoint detection-based.
Figure 8: Human motion-replicating process. (a) Shoulder, elbow, and hand 3D positions assuming zero chest tilt and roll angles. (b) Retargeted joint angles in a simulated environment. (c) Arm postures realized with the JF-2 robot.
Figure 9: (a) Snapshots of the safety test. (b) Time-force results.
Figure 10: JF-2 and JF-mini robots dancing along to four different children's songs.
Figure 11: JF-2 robot executing the gift delivery task.
Figure 12: Examples of dance motion by JF-2.
21 pages, 80623 KiB  
Article
Research on Path Planning for Intelligent Mobile Robots Based on Improved A* Algorithm
by Dexian Wang, Qilong Liu, Jinghui Yang and Delin Huang
Symmetry 2024, 16(10), 1311; https://doi.org/10.3390/sym16101311 - 4 Oct 2024
Viewed by 1291
Abstract
Intelligent mobile robots have gradually come into use across various fields, including logistics, healthcare, service, and maintenance. Path planning is a crucial aspect of intelligent mobile robot research, aiming to empower robots to autonomously create optimal trajectories in complex, dynamic environments. This study introduces an improved A* algorithm to address the shortcomings of the preliminary A* pathfinding algorithm: limited efficiency, inadequate robustness, and excessive node traversal. First, the node storage structure is optimized using a minimum heap to decrease node traversal time. In addition, the heuristic function is improved by adding an adaptive weight function and a turn penalty function. The original 8-neighbor search strategy is expanded to 16 neighbors, and invalid search neighbors are then eliminated to refine it into a new 8-neighbor strategy according to the principle of symmetry, thereby enhancing the directionality of the A* algorithm and improving search efficiency. Furthermore, a bidirectional search mechanism is implemented to further reduce search time. Finally, the planned paths are optimized using path-node elimination and cubic Bézier curves, which aligns them more closely with the kinematic constraints of the robot's drivable trajectories. Simulation experiments on grid maps of different sizes demonstrate that the proposed improved A* algorithm outperforms the preliminary A* algorithm on various metrics, such as search efficiency, node traversal count, path length, and number of inflection points. The improved algorithm provides substantial value for practical applications by efficiently planning optimal paths in complex environments and ensuring robot drivability.
(This article belongs to the Special Issue Symmetry in Evolutionary Computation and Reinforcement Learning)
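The paper's first optimization, storing open-list nodes in a minimum heap instead of scanning a list, can be illustrated with a minimal grid A*. This is a generic sketch using Python's `heapq`, a plain 4-neighbor expansion, and the Manhattan heuristic, not the authors' full 16-neighbor bidirectional variant.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-neighbor grid; grid[r][c] == 1 marks an obstacle.
    The open list is a min-heap keyed on f = g + h, so the cheapest
    node is popped in O(log n) rather than found by a linear scan."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_heap = [(h(start), 0, start, None)]
    parents, g_cost = {}, {start: 0}
    while open_heap:
        f, g, node, parent = heapq.heappop(open_heap)
        if node in parents:
            continue                      # stale entry: already expanded cheaper
        parents[node] = parent
        if node == goal:                  # reconstruct by walking parent links
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None

# A 3x3 map with an obstacle wall forcing a detour
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))   # routes around the wall via column 2
```

The heap keeps pop-cheapest at O(log n); the paper layers its adaptive-weight heuristic, pruned 16-neighbor expansion, and bidirectional search on top of this same skeleton.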
Show Figures

Figure 1: 9 × 9 square grid map.
Figure 2: Grid map after building linear indexes.
Figure 3: Grid maps with irregular obstacle shapes.
Figure 4: Regularized and expanded grid map.
Figure 5: Planning paths through square grid vertices. (a) Scenario 1. (b) Scenario 2.
Figure 6: Three common distance algorithms.
Figure 7: Simulation results for three distance equations. (a) Manhattan distance. (b) Euclidean distance. (c) Diagonal distance.
Figure 8: Search neighbors. (a) 4-Neighbor 4-Direction. (b) 8-Neighbor 8-Direction. (c) 16-Neighbor 16-Direction.
Figure 9: 16-Neighbor 8-Direction schematic.
Figure 10: Bidirectional A* algorithm flowchart.
Figure 11: Path optimization result.
Figure 12: Neighbor optimization simulation results. (a) Traditional A* algorithm. (b) Neighbor-optimized A* algorithm.
Figure 13: Heap optimization simulation results. (a) Traditional A* algorithm. (b) Heap-optimized A* algorithm.
Figure 14: Bidirectional search simulation results. (a) Traditional A* algorithm. (b) Bidirectional A* algorithm.
Figure 15: Evaluation function optimization simulation results. (a) Traditional A* algorithm. (b) Evaluation-function-optimized A* algorithm.
Figure 16: Improved A* algorithm simulation results. (a) Traditional A* algorithm. (b) Improved A* algorithm. (c) The improved A* algorithm proposed in the literature [23].
17 pages, 4604 KiB  
Article
The Influence of Energy Consumption and the Environmental Impact of Electronic Components on the Structures of Mobile Robots Used in Logistics
by Constantin-Adrian Popescu, Severus-Constantin Olteanu, Ana-Maria Ifrim, Catalin Petcu, Catalin Ionut Silvestru and Daniela-Mariana Ilie
Sustainability 2024, 16(19), 8396; https://doi.org/10.3390/su16198396 - 26 Sep 2024
Viewed by 985
Abstract
Industrial development has implicitly led to the development of new systems that increase the ability to provide services and products in real time. Autonomous mobile robots are considered some of the most important tools that can help both industry and society, offering a degree of autonomy that makes them indispensable in industrial activities. However, some aspects of these robots are not yet well characterized, such as their construction, lifetime, energy consumption, and the environmental impact of their activity. Within the context of European regulations (here, we focus on the Green Deal and the growth in greenhouse gas emissions), any industrial activity must be analyzed and optimized so that it is efficient and does not significantly impact the environment. The added value of this paper is its examination of the activities carried out by mobile robots and the impact of their electronic components on the environment. The proposed analysis centers on the electronic components of mobile robots and the environmental impact of their activity in terms of energy consumption, evaluated by calculating greenhouse gas (GHG) emissions. The way a robot's activity impacts the environment was established across the economic flow, together with possible methods of reducing this impact by optimizing the robot's activity. The environmental impact of a mobile robot's electronic components once its period of operation is completed is also analyzed.
(This article belongs to the Special Issue Sustainability and Innovation in SMEs)
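The GHG evaluation described in the abstract reduces, at its core, to converting the grid energy a robot consumes into CO2-equivalent emissions. A minimal sketch of that conversion; the power draw, duty cycle, and grid emission factor below are illustrative placeholders, not the paper's measured values.

```python
def operating_emissions_kgco2e(power_w, hours_per_day, days, grid_factor_kg_per_kwh):
    """GHG emissions of a robot's operating phase: grid energy drawn
    (kWh) times the grid's emission factor (kg CO2e per kWh). The
    factor is country- and year-specific; 0.25 below is illustrative."""
    energy_kwh = power_w / 1000.0 * hours_per_day * days
    return energy_kwh * grid_factor_kg_per_kwh

# A hypothetical 300 W AMR running two 8 h shifts, 250 days per year:
# 0.3 kW * 16 h * 250 d = 1200 kWh of grid energy
annual = operating_emissions_kgco2e(300, 16, 250, grid_factor_kg_per_kwh=0.25)
```

Route optimization, as the paper argues, acts on the `hours_per_day` and `power_w` terms: shorter, smoother routes draw less energy and therefore emit proportionally less.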
Show Figures

Figure 1: Block diagram with the main components of the designed mobile robot.
Figure 2: Robot-specific mechanical infrastructure and drive systems.
Figure 3: Electrical architecture specific to the command and control system of the designed mobile robot structure.
Figure 4: Electrical architecture specific to the sensor system in the designed mobile robot structure.
Figure 5: The application in which the designed AMR is integrated: initial route (unoptimized).
Figure 6: The application in which the designed AMR is integrated: route adapted after reconfiguring the storage area (optimized).
14 pages, 6544 KiB  
Article
Evaluating the Robot Inclusivity of Buildings Based on Surface Unevenness
by Charan Satya Chandra Sairam Borusu, Matthew S. K. Yeo, Zimou Zeng, M. A. Viraj J. Muthugala, Michael Budig, Mohan Rajesh Elara and Yixiao Wang
Appl. Sci. 2024, 14(17), 7831; https://doi.org/10.3390/app14177831 - 4 Sep 2024
Viewed by 492
Abstract
Mobile service robots experience excessive vibrations when travelling over uneven surfaces in their workspace, which increases the degradation rate of their mechanical components and disrupts the sensing needed for proper localization and navigation. Robot inclusivity principles can determine the suitability of a site for robot operation by considering the ground's unevenness. This paper proposes a novel framework to autonomously evaluate the Robot Inclusivity Level of buildings based on surface unevenness (RIL-SU) by quantifying the unevenness of floor surfaces. The unevenness values are converted to RIL-SU using a rule-based approach, and the corresponding RIL-SU is tagged to the map location. A coloured heatmap of the RIL-SU values is created as a visual representation of the RIL-SU of a given space. This heatmap is useful for modifying the environment to make it more robot-friendly or for restricting the robot's operation in certain areas to avoid possible failures. The experimental results show that the proposed framework can successfully generate a valid RIL-SU heatmap for building environments.
(This article belongs to the Section Robotics and Automation)
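The rule-based conversion from a measured unevenness value to a RIL-SU level can be sketched as a simple threshold table, with each mapped cell tagged for the heatmap. The thresholds and level scale below are invented for illustration; the paper calibrates its own against actual robot failures.

```python
def ril_su(unevenness_mm):
    """Rule-based mapping from measured surface unevenness to a Robot
    Inclusivity Level. Thresholds here are illustrative, not the
    paper's calibrated values."""
    if unevenness_mm < 2.0:
        return 3   # fully robot-inclusive
    if unevenness_mm < 5.0:
        return 2   # acceptable, but increased vibration
    if unevenness_mm < 10.0:
        return 1   # risky: localization drift likely
    return 0       # exclude this zone from robot operation

# Tag each mapped grid cell with its level, ready for heatmap rendering
cells = {(0, 0): 1.2, (0, 1): 4.1, (1, 0): 7.5, (1, 1): 14.0}  # mm per cell
heatmap = {cell: ril_su(u) for cell, u in cells.items()}
```

Rendering then only needs a colour per level; zones at level 0 can be fenced off in the robot's navigation map rather than physically modified.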
Show Figures

Figure 1: (a) Localization drift caused by an uneven surface and the robot following an incorrect path; (b) map boundary distortion caused by an uneven surface.
Figure 2: Meerkat audit robot.
Figure 3: System architecture.
Figure 4: (a) A 2D LiDAR map of the study area and the path taken by the audit robot; (b) recorded unevenness variation as a heatmap; (c) resultant RIL-SU heatmap for the site.
Figure 5: Robot navigation performance at different zones in the study area during the validation phase. (a) Location 1, (b) Location 2, and (c) Location 3.
Figure 6: (a) A 2D LiDAR map of the printing room and the path taken by the audit robot; (b) surface map of the printing room; (c) RIL-SU colour map of the printing room.
Figure 7: Robot navigation performance in different zones in the printing room during the validation phase. (a) Location 4, (b) Location 5, and (c) Location 6.
Figure 8: (a) A 2D LiDAR map of the connector bridge area and the path taken by the audit robot; (b) surface map of the connector bridge area; (c) RIL-SU colour map of the connector bridge area.
Figure 9: Robot navigation performance in different zones in the connector bridge area during the validation phase. (a) Location 7, (b) Location 8, and (c) Location 9.
17 pages, 3554 KiB  
Article
Robot Operating Systems–You Only Look Once Version 5–Fleet Efficient Multi-Scale Attention: An Improved You Only Look Once Version 5-Lite Object Detection Algorithm Based on Efficient Multi-Scale Attention and Bounding Box Regression Combined with Robot Operating Systems
by Haiyan Wang, Zhan Shi, Guiyuan Gao, Chuang Li, Jian Zhao and Zhiwei Xu
Appl. Sci. 2024, 14(17), 7591; https://doi.org/10.3390/app14177591 - 28 Aug 2024
Viewed by 646
Abstract
This paper primarily investigates enhanced object detection techniques for indoor mobile service robots. Robot operating systems (ROS) supply rich sensor data, which boosts the models' ability to generalize. However, model performance may be hindered by the limited processing power, memory capacity, and communication capabilities of robotic devices. To address these issues, this paper proposes an improved you only look once version 5 (YOLOv5)-Lite object detection algorithm based on efficient multi-scale attention and bounding box regression combined with ROS. The algorithm incorporates efficient multi-scale attention (EMA) into the traditional YOLOv5-Lite model and replaces the C3 module with a lightweight C3Ghost module to reduce computation and model size during convolution. To enhance bounding-box localization accuracy, modified precision-defined intersection over union (MPDIoU) is employed to optimize the model, resulting in the ROS–YOLOv5–FleetEMA model. Relative to the conventional YOLOv5-Lite model, ROS–YOLOv5–FleetEMA enhanced the mean average precision (mAP) by 2.7% post-training, reduced giga floating-point operations (GFLOPs) by 13.2%, and decreased the parameter count by 15.1%. In light of these experimental findings, the model was incorporated into ROS, leading to the development of a ROS-based object detection platform that offers rapid and precise object detection.
(This article belongs to the Special Issue Object Detection and Image Classification)
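The MPDIoU criterion used here penalizes plain IoU by the normalized squared distances between the two boxes' corresponding corners, so even non-overlapping or same-IoU boxes get a gradient toward alignment. A sketch of the quantity as it is usually defined in the bounding-box-regression literature (the `(x1, y1, x2, y2)` corner convention and image-diagonal normalization are assumptions, not taken from this paper):

```python
def mpdiou(box_a, box_b, img_w, img_h):
    """IoU minus the squared top-left and bottom-right corner distances,
    normalized by the squared image diagonal. Boxes are (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection and union areas
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # corner-distance penalties
    diag2 = img_w ** 2 + img_h ** 2
    d_tl = (ax1 - bx1) ** 2 + (ay1 - by1) ** 2
    d_br = (ax2 - bx2) ** 2 + (ay2 - by2) ** 2
    return iou - d_tl / diag2 - d_br / diag2

same = mpdiou((10, 10, 50, 50), (10, 10, 50, 50), 640, 480)  # identical boxes
shifted = mpdiou((0, 0, 10, 10), (5, 0, 15, 10), 100, 100)   # 5 px offset
```

Identical boxes score exactly 1.0, and any corner offset subtracts from the IoU, which is what makes 1 − MPDIoU usable as a regression loss.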
Show Figures

Figure 1: YOLOv5-Lite network structure.
Figure 2: Basic units of ShuffleNet V2. (a) Deep stacking module Stage 1; (b) deep stacking module Stage 2.
Figure 3: Efficient multi-scale attention.
Figure 4: Traditional convolution and GhostNet convolution processes.
Figure 5: Ghost module.
Figure 6: Hardware structure of the Ackermann differential car.
Figure 7: Workflow of the object detection function.
Figure 8: Ablation experiment results.
Figure 9: ROS-based object detection platform.
14 pages, 3833 KiB  
Article
Real-Time Indoor Visible Light Positioning (VLP) Using Long Short Term Memory Neural Network (LSTM-NN) with Principal Component Analysis (PCA)
by Yueh-Han Shu, Yun-Han Chang, Yuan-Zeng Lin and Chi-Wai Chow
Sensors 2024, 24(16), 5424; https://doi.org/10.3390/s24165424 - 22 Aug 2024
Cited by 1 | Viewed by 672
Abstract
New applications such as augmented reality/virtual reality (AR/VR), the Internet of Things (IoT), and autonomous mobile robot (AMR) services require highly reliable and accurate real-time positioning and tracking of persons and devices in indoor areas. Among the different visible-light-positioning (VLP) schemes, such as proximity, time-of-arrival (TOA), time-difference-of-arrival (TDOA), angle-of-arrival (AOA), and received-signal-strength (RSS), the RSS scheme is simple, efficient, and relatively easy to implement. As the received optical power has an inverse relationship with the distance between the LED transmitter (Tx) and the photodiode (PD) receiver (Rx), position information can be estimated from the received optical power of the different Txs. In this work, we propose and experimentally demonstrate a real-time VLP system utilizing a long short-term memory neural network (LSTM-NN) with principal component analysis (PCA) to mitigate high positioning error, particularly at the positioning-unit-cell boundaries. Experimental results show that in a positioning unit cell of 100 × 100 × 250 cm³, the average positioning error is 5.912 cm when using the LSTM-NN alone. By utilizing PCA, the positioning accuracy is significantly enhanced to 1.806 cm, particularly at the unit-cell boundaries and corners, a positioning error reduction of 69.45%. In the cumulative distribution function (CDF) measurements, the positioning error of 95% of the experimental data is <15 cm when using only the LSTM-NN model, and <5 cm when using the LSTM-NN with PCA. In addition, we experimentally demonstrate that the proposed real-time VLP system can also predict the direction and trajectory of a moving Rx.
(This article belongs to the Special Issue Challenges and Future Trends in Optical Communications)
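The PCA stage in this pipeline projects the received-signal feature vectors onto their leading principal components, decorrelating them before the LSTM-NN. A generic sketch via SVD; the toy 4-feature "RSS" matrix below is invented, not the paper's measured data.

```python
import numpy as np

def pca_transform(X, n_components):
    """Project rows of X (samples x features) onto their top principal
    components: center each feature, take the SVD of the centered data,
    and keep the scores on the leading right-singular vectors."""
    Xc = X - X.mean(axis=0)                     # center each feature column
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T             # component scores

# Four RSS-like feature vectors reduced from 4 dimensions to 2
X = np.array([[1.0, 2.0, 1.1, 0.9],
              [2.0, 4.1, 2.0, 2.1],
              [3.0, 6.0, 3.1, 2.9],
              [4.0, 8.1, 4.0, 4.1]])
Z = pca_transform(X, 2)
# Z has shape (4, 2); the first column carries most of the variance
```

Because the four input channels here are strongly correlated, almost all the variance collapses onto the first component, which is exactly the redundancy the paper exploits to stabilize boundary-region predictions.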
Show Figures

Figure 1: (a) Architecture of the VLP system with four LEDs modulated by specific RF carrier frequencies f1, f2, f3, and f4 (47 kHz, 59 kHz, 83 kHz, 101 kHz), respectively. (b) Bird's-eye view of the positioning unit cell indicating the training and testing locations.
Figure 2: (a) Photo of the VLP experiment. (b) Photo of the client side. The PD, RTO, and PC are all placed on a trolley for training- and testing-data collection. PD: photodiode; RTO: real-time oscilloscope.
Figure 3: Architecture of the VLP Rx. ID: optical identifier; BPF: band-pass filter; LPF: low-pass filter.
Figure 4: Flow diagram of the proposed real-time VLP system utilizing LSTM-NN with PCA.
Figure 5: Flow diagram of the PCA used in the VLP experiment.
Figure 6: Structure of an LSTM cell used in the LSTM-NN model.
Figure 7: Structure of the proposed LSTM-NN model used in both the training and testing phases.
Figure 8: Error distributions using (a) the LSTM-NN only and (b) the LSTM-NN with PCA.
Figure 9: CDF of the measured positioning error using the LSTM-NN only and the LSTM-NN with PCA.
Figure 10: Error distributions using (a) FCN only and (b) FCN with PCA.
Figure 11: CDF of the measured positioning error using FCN only and FCN with PCA.
Figure 12: Experimentally predicted location of the moving Rx using the LSTM-NN with PCA at different iterations. (a–h) Predicted direction and trajectory of the Rx from iterations 1 to 7.
24 pages, 22734 KiB  
Article
Optimizing Orchard Planting Efficiency with a GIS-Integrated Autonomous Soil-Drilling Robot
by Osman Eceoğlu and İlker Ünal
AgriEngineering 2024, 6(3), 2870-2890; https://doi.org/10.3390/agriengineering6030166 - 13 Aug 2024
Viewed by 722
Abstract
A typical orchard's mechanical operation consists of three or four stages: lining and digging for plantation, moving the seedlings from nurseries to the farm, moving each seedling to its planting hole, and planting the seedling in the hole. Digging the planting hole is the most time-consuming operation. In fruit orchards, robots are increasingly used to raise operational efficiency. They offer practical and effective services to both industry and people, whether they are assigned to plant trees, reduce the use of chemical fertilizers, or carry heavy loads to relieve staff. Robots can operate for extended periods and can be highly adept at repetitive tasks like planting many trees. The present study aims to identify the locations for planting trees in orchards using geographic information systems (GISs), to develop an autonomous drilling machine, and to use the developed robot to open planting holes; there is no comparable study on autonomous hole planting in the literature. The agricultural mobile robot is a four-wheeled nonholonomic robot with differential steering and the ability to drive to stable target positions. The designed mobile robot can be used in fully autonomous, partially autonomous, or fully manual modes. The drilling system, a y-axis shifter driven by a DC motor with a reducer, includes an auger with a 2.1 HP gasoline engine. SOLIDWORKS 2020 software was used to design and draw the mobile robot and drilling system, and the Microsoft Visual Basic .NET programming language was used to create the robot navigation and drilling-mechanism software. The cross-track error (XTE), which measures the distances between the actual and desired hole positions, was used to analyze the steering accuracy of the mobile robot to the drilling spots. The arithmetic means averaged 4.35 cm, with a standard deviation of 1.73 cm, indicating that the suggested system is effective for drilling planting holes in orchards.
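The XTE metric scoring the robot's steering accuracy is, in the standard navigation sense, the perpendicular distance from the robot's actual position to the planned line between waypoints. A minimal sketch of that computation; the coordinates below are invented for illustration.

```python
import math

def cross_track_error(p, a, b):
    """Perpendicular distance from actual position p to the planned
    line through waypoints a and b (2D points, same units). The cross
    product gives the parallelogram area; dividing by the base length
    leaves the height, i.e. the XTE."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    return abs(dx * (py - ay) - dy * (px - ax)) / math.hypot(dx, dy)

# Robot drilled 5 cm to the side of a planned north-south row at x = 100 cm
xte = cross_track_error(p=(105.0, 40.0), a=(100.0, 0.0), b=(100.0, 80.0))
```

Averaging this value over all drilled holes yields summary statistics of the kind reported above (mean and standard deviation of the XTE).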
Show Figures

Figure 1: This study's conceptual structure for marking drilling points from satellite images and drilling with a mobile robot.
Figure 2: Equipment used on the GIS-based autonomous soil-drilling robot and a photo of the developed robot.
Figure 3: Top, front, and side views of the technical drawing of the mobile robot.
Figure 4: The quadrant control mechanism's flowchart [26].
Figure 5: Flowchart of the mobile robot navigation method.
Figure 6: Top, front, and side views of the technical drawings of the mobile robot attached to the soil auger machine.
Figure 7: Full-scale technical drawing of the soil auger machine and H-shaped grid.
Figure 8: Kinematics schematic of the differential-drive mobile robot.
Figure 9: ArcMap program used to locate the seedling holes.
Figure 10: Developed navigation and drilling software: (a) the software designed for mobile robot navigation; (b) soil-drilling procedures.
Figure 11: The target locations and the locations where the mobile robot drilled holes.
Figure 12: Images of the mobile robot in the study field during the hole-drilling operation.
Figure 13: Images of the drilled holes.
Figure 14: Histogram of the mobile robot's XTE values; the x axis shows the range of XTE values and the y axis their frequency.
Figure 15: Normal Q–Q plot of XTE values, comparing the XTE values on the vertical axis to a statistical population on the horizontal axis.
Figure 16: Spatial distribution of the XTE, representing the correlation between XTE values and locations across the study field.
22 pages, 3272 KiB  
Article
Stochastic Multi-Objective Multi-Trip AMR Routing Problem with Time Windows
by Lulu Cheng, Ning Zhao and Kan Wu
Mathematics 2024, 12(15), 2394; https://doi.org/10.3390/math12152394 - 31 Jul 2024
Viewed by 721
Abstract
In recent years, with a rapidly aging population, alleviating the pressure on medical staff has become a critical issue. To improve the work efficiency of medical staff and reduce the risk of infection, we consider the multi-trip autonomous mobile robot (AMR) routing problem in a stochastic environment. Our goal is to minimize the total expected operating cost and maximize the total service quality for patients, ensuring that each route violates the vehicle capacity and the time windows with only a minimal probability. The travel time of AMRs is stochastically affected by the surrounding environment; the demand of each ward is unknown until the AMR reaches it, and the service time is linearly related to the actual demand. We developed a population-based tabu search algorithm (PTS) that combines the genetic algorithm with tabu search to solve this problem. Extensive numerical experiments on modified Solomon instances demonstrate the efficiency of the PTS algorithm and reveal the impact of the confidence level on the optimal solution, providing insights for decision-makers devising delivery schemes that balance operating costs with patient satisfaction.
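The 2-opt move used inside local searches of this kind (and shown in the paper's Figure 1) reverses a segment of the route, replacing two crossing edges with two non-crossing ones. A sketch on an invented unit-square instance, not one of the Solomon instances:

```python
import math

def two_opt_once(route, i, k):
    """One 2-opt move: reverse route[i:k+1], replacing edges
    (i-1, i) and (k, k+1) with (i-1, k) and (i, k+1)."""
    return route[:i] + route[i:k + 1][::-1] + route[k + 1:]

def route_length(route, dist):
    """Total length of the route under a distance matrix."""
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

# Four wards at the corners of a unit square
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[math.dist(p, q) for q in pts] for p in pts]

bad = [0, 2, 1, 3]                 # route crosses itself (two diagonals)
good = two_opt_once(bad, 1, 2)     # reversing wards 1..2 uncrosses it
```

In the PTS algorithm such moves are applied repeatedly, with a tabu list preventing the search from immediately undoing a move it has just made.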
Show Figures

Figure 1: The 2-opt operator: (a) before and (b) after using the 2-opt operator.
Figure 2: Relocation operator: (a) before and (b) after using the relocation operator.
Figure 3: Depot insertion operator: (a) the current route is infeasible; (b) feasible route after repair.
Figure 4: Crossover operator.
Figure 5: Swap operator.
Figure 6: The mean S/N ratio plot for the PTS algorithm.
Figure 7: Experiment to determine the population setting of the PTS algorithm: (a) C101; (b) C201; (c) R101; (d) R201; (e) RC101; (f) RC201 instances.
Figure 8: The impact of changing confidence levels: (a) on TDS for instance types C1, R1, and RC1; (b) on TDS for types C2, R2, and RC2; (c) on the number of AMRs for types C1, R1, and RC1; (d) on the number of AMRs for types C2, R2, and RC2.
21 pages, 3963 KiB  
Article
Empowering Clinical Engineering and Evidence-Based Maintenance with IoT and Indoor Navigation
by Alessio Luschi, Giovanni Luca Daino, Gianpaolo Ghisalberti, Vincenzo Mezzatesta and Ernesto Iadanza
Future Internet 2024, 16(8), 263; https://doi.org/10.3390/fi16080263 - 25 Jul 2024
Viewed by 988
Abstract
The OHIO (Odin Hospital Indoor cOmpass) project received funding from the European Union’s Horizon 2020 research and innovation action program, via ODIN–Open Call, which is issued and executed under the ODIN project and focuses on enhancing hospital safety, productivity, and quality by introducing digital solutions, such as the Internet of Things (IoT), robotics, and artificial intelligence (AI). OHIO aims to enhance the productivity and quality of medical equipment maintenance activities within the pilot hospital, “Le Scotte” in Siena (Italy), by leveraging internal informational resources. OHIO will also be completely integrated with the ODIN platform, taking advantage of the available services and functionalities. OHIO exploits Bluetooth Low Energy (BLE) tags and antennas together with the resources provided by the ODIN platform to develop a complex ontology-based IoT framework, which acts as a central cockpit for the maintenance of medical equipment through a central management web application and an indoor real-time location system (RTLS) for mobile devices. The application programmable interfaces (APIs) are based on REST architecture for seamless data exchange and integration with the hospital’s existing computer-aided facility management (CAFM) and computerized maintenance management system (CMMS) software. The outcomes of the project are assessed both with quantitative and qualitative methods, by evaluating key performance indicators (KPIs) extracted from the literature and performing a preliminary usability test on both the whole system and the graphic user interfaces (GUIs) of the developed applications. The test implementation demonstrates improvements in maintenance timings, including a reduction in maintenance operation delays, duration of maintenance tasks, and equipment downtime. Usability post-test questionnaires show positive feedback regarding the usability and effectiveness of the applications. 
The OHIO framework enhanced the effectiveness of medical equipment maintenance by integrating existing software with newly designed, enhanced interfaces. The research also indicates possibilities for scaling up the developed methods and applications to additional large-scale pilot hospitals within the ODIN network. Full article
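The three timing KPIs the pilot evaluates (maintenance operation delay, duration of maintenance tasks, and equipment downtime) can be derived directly from work-order timestamps. A hedged sketch, using hypothetical field names rather than the actual ODIN/CMMS schema:

```python
from datetime import datetime

def maintenance_kpis(work_orders):
    """Compute mean maintenance timing KPIs (in hours) from work-order
    records. The 'reported'/'started'/'closed' field names and the ISO
    timestamp format are illustrative assumptions, not the ODIN schema."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    delays, durations, downtimes = [], [], []
    for wo in work_orders:
        reported = datetime.strptime(wo["reported"], fmt)
        started = datetime.strptime(wo["started"], fmt)
        closed = datetime.strptime(wo["closed"], fmt)
        delays.append((started - reported).total_seconds() / 3600)    # delay before work begins
        durations.append((closed - started).total_seconds() / 3600)   # duration of the task itself
        downtimes.append((closed - reported).total_seconds() / 3600)  # total equipment downtime
    n = len(work_orders)
    return {
        "mean_delay_h": sum(delays) / n,
        "mean_duration_h": sum(durations) / n,
        "mean_downtime_h": sum(downtimes) / n,
    }
```

Comparing these aggregates before and after deployment is one straightforward way to quantify the reductions the test implementation reports.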
Show Figures

Figure 1: A screen capture from SPOT.
Figure 2: HiWay in off-site mode (left) allows saving custom routes for planning scopes, before reaching the premises. Routes can then be loaded once they arrive on-site and be used to obtain directions in real time (middle and right).
Figure 3: Position of the installed BLE beacon on the ground floor of the hospital. The highlighted green area shows the good quality of the Bluetooth coverage.
Figure 4: Fingerprinting quality for the RTLS. (a) Magnetic mapping quality. (b) WiFi environment quality. (c) Beacon environment quality.
Figure 5: Schema of the ODIN platform.
Figure 6: The ODIN ontology.
Figure 7: Schema illustrating the communication among the various components of OHIO.
Figure 8: OHIO management web application interface. The devices associated with the last two work orders are currently not collected in the EEH (the green check mark is missing).
Figure 9: HiWay mobile application (1). (a) The collecting room code is highlighted in green, and the medical device the work order refers to is placed inside that filter zone and can be accessed by the technician. (b) The navigation module shows the shortest route from the current position to the target room, enabling real-time navigation.
Figure 10: HiWay mobile application (2). (a) The technician can access all the documents related to the inspected medical equipment by leveraging the connection between HiWay and the SPOT document manager. (b) Once a technician closes a work order, they must select a fault class among the 10 available ones to classify the maintenance.
Figure 11: User satisfaction questionnaire. Responses in a range from 1 (strongly disagree) to 5 (strongly agree) are evaluated as the frequency count and percentage obtained for each question.
11 pages, 5434 KiB  
Article
An Innovative Device Based on Human-Machine Interface (HMI) for Powered Wheelchair Control for Neurodegenerative Disease: A Proof-of-Concept
by Arrigo Palumbo, Nicola Ielpo, Barbara Calabrese, Remo Garropoli, Vera Gramigna, Antonio Ammendolia and Nicola Marotta
Sensors 2024, 24(15), 4774; https://doi.org/10.3390/s24154774 - 23 Jul 2024
Viewed by 901
Abstract
In the global context, advancements in technology and science have rendered virtual, augmented, and mixed-reality technologies capable of transforming clinical care and medical environments by offering enhanced features and improved healthcare services. This paper presents a mixed reality-based system to control a robotic wheelchair for people with limited mobility. The test group comprised 11 healthy subjects (six male, five female, mean age 35.2 ± 11.7 years). A novel platform that integrates a smart wheelchair and an eye-tracking-enabled head-mounted display was proposed to reduce the cognitive requirements needed for wheelchair movement and control. The approach’s effectiveness was demonstrated by evaluating our system in realistic scenarios. The demonstration of the proposed AR head-mounted display user interface for controlling a smart wheelchair and the results provided in this paper could highlight the potential of HoloLens 2-based innovative solutions and bring focus to emerging research topics, such as remote control, cognitive rehabilitation, the enhancement of autonomy for patients with severe disabilities, and telemedicine. Full article
(This article belongs to the Special Issue Computational Intelligence Based-Brain-Body Machine Interface)
Show Figures

Figure 1: The prototype system architecture.
Figure 2: HoloLens 2 device.
Figure 3: Using the virtual keyboard via eye-tracking (view from the HoloLens 2).
Figure 4: Yousefi et al. [10] circuit.
Figure 5: Outdoor test course for the training.
18 pages, 12880 KiB  
Article
Low-Cost 3D Indoor Visible Light Positioning: Algorithms and Experimental Validation
by Sanjha Khan, Josep Paradells and Marisa Catalan
Photonics 2024, 11(7), 626; https://doi.org/10.3390/photonics11070626 - 29 Jun 2024
Viewed by 900
Abstract
Visible light technology presents a significant advancement for indoor IoT applications. These systems offer enhanced bit-rate transmission, enabling faster and more reliable data transfer. Moreover, optical-based visible light systems facilitate improved location services within indoor environments. However, many of these systems still exhibit accuracy limited to several centimeters, even when relying on costly high-resolution cameras. This paper introduces a novel low-cost visible light system for 3D positioning, designed to enhance indoor positioning accuracy using low-resolution images. Initially, we propose a non-integer pixel (NI-P) algorithm to enhance precision without the need for higher-resolution images. This algorithm allows the system to identify the precise light-spot coordinates on low-resolution images, enabling accurate positioning. Subsequently, we present an algorithm leveraging the precise coordinate data from the previous step to determine the 3D position of objects even in the presence of measurement errors. Benefiting from high accuracy, reduced cost, and low complexity, the proposed system is suitable for implementation on low-end hardware platforms, thereby increasing the versatility and feasibility of visible light technologies in indoor settings. Experimental results show an average 2D positioning error of 1.08 cm and a 3D error within 1.4 cm at 2.3 m separation between the object and camera, achieved with an average positioning time of 20 ms on a low-end embedded device. Consequently, the proposed system offers fast and highly accurate indoor positioning and tracking capabilities, making it suitable for applications like mobile robots, automated guided vehicles, and indoor parking management. Furthermore, it is easy to deploy and does not require re-calibration. Full article
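The paper's NI-P algorithm itself is not reproduced here, but an intensity-weighted centroid is one common way to estimate a light spot's non-integer pixel coordinates from a low-resolution image, which is the kind of sub-pixel estimate the abstract describes. A minimal sketch under that assumption (the threshold is illustrative):

```python
import numpy as np

def subpixel_spot_center(image, threshold=0.5):
    """Estimate the non-integer (sub-pixel) coordinates of a bright spot
    as the intensity-weighted centroid of pixels above a fraction of the
    peak intensity. Returns (x, y) in pixel units as floats."""
    img = image.astype(float)
    mask = img >= threshold * img.max()   # keep only the bright spot region
    weights = np.where(mask, img, 0.0)
    total = weights.sum()
    ys, xs = np.indices(img.shape)        # row/column index grids
    cy = (ys * weights).sum() / total
    cx = (xs * weights).sum() / total
    return cx, cy
```

Because the centroid averages over many pixels, its resolution is finer than the integer pixel grid, which is what lets a low-resolution camera deliver centimeter-level positioning.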
Show Figures

Figure 1: VLP system design with multiple cameras on the ceiling and a light source on the other side.
Figure 2: The actual light spot at a non-integer place appearing at the integer pixel grid; the colors on the image grid represent the intensity level.
Figure 3: Example pixel plotting on the image grid.
Figure 4: Non-integer pixel coordinate.
Figure 5: Calculating procedure for vertical and horizontal coordinates.
Figure 6: Positioning plane.
Figure 7: Detail of triangle C1C2L′.
Figure 8: Real experimental setup. (a) Three cameras fixed on the ceiling. (b) LED on the floor at a regular measured grid.
Figure 9: Positioning errors from three approaches. (a) Highest integer, (b) mean integer, and (c) NI-P.
Figure 10: (a) Comparative analysis of PE; (b) test point map.
Figure 11: Cumulative distribution of PE.
Figure 12: Distribution of positioning errors for camera pairs C1C2, C2C3, and C1C3.
Figure 13: Three-dimensional positioning errors corresponding to cases (a–c).
Figure 14: The 3D positioning error versus the number of cameras.
Figure 15: The 3D positioning error with different camera arrangements. (a) Straight-line arrangement; (b) triangular arrangement.
Figure 16: Three-dimensional positioning error at (a) h = 0 cm; (b) h = 15 cm; (c) h = 35 cm; (d) h = 70 cm.
17 pages, 5345 KiB  
Article
Application of Improved Sliding Mode and Artificial Neural Networks in Robot Control
by Duc-Anh Pham, Jong-Kap Ahn and Seung-Hun Han
Appl. Sci. 2024, 14(12), 5304; https://doi.org/10.3390/app14125304 - 19 Jun 2024
Viewed by 854
Abstract
Mobile robots are autonomous devices capable of self-motion, and are utilized in applications ranging from surveillance and logistics to healthcare services and planetary exploration. Precise trajectory tracking is a crucial component in robotic applications. This study introduces the use of improved sliding surfaces and artificial neural networks in controlling mobile robots. An enhanced sliding surface, combined with exponential and hyperbolic tangent approach laws, is employed to mitigate chattering phenomena in sliding mode control. The nonlinear components of the sliding control law are estimated using artificial neural networks, whose weights are updated online using a gradient descent algorithm. The stability of the system is demonstrated using Lyapunov theory. Simulation results in MATLAB/Simulink R2024a validate the effectiveness of the proposed method, with rise times of 0.071 s, an overshoot of 0.004%, and steady-state errors approaching zero. Settling times were 0.0978 s for the x-axis and 0.0902 s for the y-axis, and chattering exhibited low amplitude and frequency. Full article
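A hedged sketch of the two ingredients the abstract names: an RBF network whose output weights are updated online by gradient descent to estimate the nonlinear term, and a reaching law combining an exponential term with a hyperbolic tangent in place of sign() to attenuate chattering. Gains, centers, widths, and the learning rate below are illustrative choices, not the paper's tuned values:

```python
import numpy as np

class RBFApproximator:
    """Radial-basis-function network with online gradient-descent updates
    of its output weights, as used to estimate a nonlinear control term."""
    def __init__(self, centers, width=1.0, lr=0.05):
        self.c = np.asarray(centers, dtype=float)  # (n_basis, n_inputs)
        self.b = width
        self.w = np.zeros(len(self.c))
        self.lr = lr

    def phi(self, x):
        # Gaussian basis activations for input x.
        d2 = ((self.c - x) ** 2).sum(axis=1)
        return np.exp(-d2 / (2 * self.b ** 2))

    def predict(self, x):
        return self.w @ self.phi(x)

    def update(self, x, error):
        # Gradient step on 0.5 * error**2 with respect to the weights.
        self.w += self.lr * error * self.phi(x)

def reaching_law(s, k1=2.0, k2=1.0, eps=0.1):
    """Exponential + hyperbolic-tangent reaching law: tanh(s/eps) replaces
    sign(s), smoothing the control near the sliding surface s = 0."""
    return -k1 * s - k2 * np.tanh(s / eps)
```

The tanh term is what reduces the chattering amplitude: unlike sign(), it varies continuously through s = 0, so the control signal no longer switches discontinuously.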
Show Figures

Figure 1: The robot model.
Figure 2: Schematic structure of the sliding controller based on an improved sliding mode control.
Figure 3: RBF neural network structure.
Figure 4: MATLAB/Simulink diagram simulating the ISMC-RBF controller.
Figure 5: Response of the ISMC-RBF between x_d and x_W, and y_d and y_W, with the Tricuspoid curve.
Figure 6: Control signals u of the ISMC-RBF.
Figure 7: Sliding surfaces s.
Figure 8: Trajectory response of the ISMC-RBF controller for the robot with the Tricuspoid curve.
Figure 9: Trajectory response of the ISMC-RBF controller for the robot with the Tricuspoid curve under noise.
Figure 10: Trajectory response of the ISMC-RBF controller for the robot with the Lissajous curve under noise.
16 pages, 4116 KiB  
Article
A Framework for Auditing Robot-Inclusivity of Indoor Environments Based on Lighting Condition
by Zimou Zeng, Matthew S. K. Yeo, Charan Satya Chandra Sairam Borusu, M. A. Viraj J. Muthugala, Michael Budig, Mohan Rajesh Elara and Yixiao Wang
Buildings 2024, 14(4), 1110; https://doi.org/10.3390/buildings14041110 - 16 Apr 2024
Cited by 1 | Viewed by 875
Abstract
Mobile service robots employ vision systems to discern objects in their workspaces for navigation or object detection. The lighting conditions of the surroundings affect a robot’s ability to discern and navigate in its work environment. Robot inclusivity principles can be used to determine the suitability of a site’s lighting conditions for robot performance. This paper proposes a novel framework for autonomously auditing the Robot Inclusivity Index of indoor environments based on the lighting condition (RII-Lux). The framework considers the factors of light intensity and the presence of glare to define the RII-Lux of a particular location in an environment. The auditing framework is implemented on a robot to autonomously generate a heatmap visually representing the variation in RII-Lux across an environment. The applicability of the proposed framework for generating true-to-life RII-Lux heatmaps has been validated through experimental results. Full article
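One simple way to quantify glare from a camera image, consistent with the per-image glare percentages the framework computes, is to count near-saturated pixels. The saturation threshold below is an illustrative assumption, not the paper's calibrated value:

```python
import numpy as np

def glare_percentage(gray_image, saturation_level=250):
    """Return the percentage of near-saturated pixels in an 8-bit
    grayscale image, a simple proxy for the presence of glare.
    saturation_level is an illustrative threshold (0-255 scale)."""
    g = np.asarray(gray_image)
    return 100.0 * (g >= saturation_level).sum() / g.size
```

A per-location glare score like this, combined with a lux reading, is the kind of pair of factors that can be mapped onto a heatmap cell as the robot traverses the site.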
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
Show Figures

Figure 1: Overview of the RII-Lux process.
Figure 2: The process of glare detection.
Figure 3: (a–e) Sample images used and their corresponding glare values before cropping. The manual calculations for each image’s glare percentages are also given here for comparison.
Figure 4: Meerkat audit robot and its specifications.
Figure 5: Meerkat software architecture.
Figure 6: 2D lidar map of site 1: printing room. The location and orientation of the LED spotlight are depicted by the lightbulb symbol at the right of the diagram.
Figure 7: View of site 1: printing room. (a) Dark area; (b) illuminated area.
Figure 8: Lux heatmap for site 1.
Figure 9: Glare map for site 1. Arrows indicate the glare-detected instances and the direction.
Figure 10: Generated RII-Lux heatmap. (a–c) annotate the locations chosen for the object-detection test setup.
Figure 11: Detection results during the validation of site 1; (a–c) are the locations ‘a’, ‘b’, and ‘c’ in Figure 10.
Figure 12: 2D lidar map of site 2: mock living space; the location and orientation of the spotlight are depicted by the lightbulb symbol.
Figure 13: Views of site 2: mock living space. (a) Darkened room; (b) room with spotlight; (c) living room with typical lighting conditions.
Figure 14: Lux heatmap for site 2.
Figure 15: Glare map for site 2. Arrowheads indicate the glare-detected instances and the direction. Arrowheads are grouped into the numbered arrow clusters for reference in the text.
Figure 16: Generated RII-Lux heatmap for site 2. (a–c) annotate the locations chosen for the object-detection test setup.
Figure 17: Detection results during the validation in site 2; (a) darkened room, (b) objects backlit by LED spotlight, (c) typical indoor lighting conditions.
20 pages, 1775 KiB  
Article
Real-Time Traffic Light Recognition with Lightweight State Recognition and Ratio-Preserving Zero Padding
by Jihwan Choi and Harim Lee
Electronics 2024, 13(3), 615; https://doi.org/10.3390/electronics13030615 - 1 Feb 2024
Cited by 1 | Viewed by 1535
Abstract
As online shopping becomes mainstream, driven by the social impact of Coronavirus disease 2019 (COVID-19) as well as the development of Internet services, the demand for autonomous delivery mobile robots is rapidly increasing. This trend has brought the autonomous mobile robot market to a new turning point, with expectations that numerous mobile robots will be driving on roads with traffic. To meet these expectations, autonomous mobile robots must precisely perceive the situation on roads with traffic. In this paper, we revisit and implement a real-time traffic light recognition system with a proposed lightweight state recognition network and ratio-preserving zero padding. The system is a two-stage pipeline consisting of a traffic light detection (TLD) module and a traffic light status recognition (TLSR) module. For the TLSR module, this work proposes a lightweight state recognition network with a small number of weight parameters, because the TLD module needs more weight parameters to find the exact location of traffic lights. The proposed effective and lightweight network architecture is constructed using skip connections, multiple feature maps of different sizes, and kernels of appropriately tuned sizes. As a result, the network has a negligible impact on the overall processing time and minimal weight parameters while maintaining high performance. We also propose a ratio-preserving zero padding method for data preprocessing in the TLSR module to enhance recognition accuracy. For the TLD module, extensive evaluations with varying input sizes and backbone network types are conducted, and appropriate values for those factors are determined, striking a balance between detection performance and processing time.
Finally, we demonstrate that our traffic light recognition system, utilizing the TLD module’s determined parameters, the proposed network architecture for the TLSR module, and the ratio-preserving zero padding method, can reliably detect the location and state of traffic lights in real-world videos recorded in Gumi and Daegu, Korea, while maintaining at least 30 frames per second for real-time operation. Full article
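Ratio-preserving zero padding rescales a detected traffic-light crop so its aspect ratio is unchanged, then fills the remainder of the square network input with zeros, so the TLSR network never sees a geometrically distorted light. A minimal NumPy sketch (nearest-neighbour resize, square target; the target size is illustrative):

```python
import numpy as np

def ratio_preserving_zero_pad(image, target_size):
    """Scale an image so its longer side equals target_size, keeping the
    aspect ratio, then center it on a zero-filled square canvas."""
    h, w = image.shape[:2]
    scale = target_size / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize via index mapping (no external dependencies).
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = image[rows[:, None], cols]
    # Zero canvas; the crop is centered, the rest stays black.
    canvas = np.zeros((target_size, target_size) + image.shape[2:], dtype=image.dtype)
    top = (target_size - new_h) // 2
    left = (target_size - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas
```

Compared with naive resizing to a square, this keeps the circular lamps circular, which is plausibly why it improves state-recognition accuracy.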
(This article belongs to the Special Issue Intelligence Control and Applications of Intelligence Robotics)
Show Figures

Figure 1: Block diagram of the overall system: the TLD module detects traffic lights and the TLSR module recognizes the status of the detected traffic lights.
Figure 2: Network architectures for the TLSR module: (a) ‘CONV’ and ‘CONV-FIT’; (b) ‘FPN’, ‘FPN-FIT’, ‘FPN-RES’, and ‘FPN-RES-FIT’; (c–e) subnetwork structures: (c) for ‘FPN’ and ‘FPN-FIT’; (d) ‘FPN-RES’; (e) ‘FPN-RES-FIT’.
Figure 3: Comparison of various resizing approaches.
Figure 4: The architecture of the TLD module.
Figure 5: HF scores for traffic light classes.
Figure 6: Comparison of HF scores of FPN-RES-FIT with and without ratio-preserving zero padding.
Figure 7: Precision–recall curves of the TLD module.
Figure 8: Example showing the difference in performance between 640 and 960 input image sizes.
Figure 9: Demonstration images using videos taken on various domestic traffic roads.
Figure 10: Resulting images for several weather conditions: (a) brightness, (b) darkness, and (c) cloudiness and rain.
9 pages, 1126 KiB  
Proceeding Paper
A Review of Recent Developments in 6G Communications Systems
by Srikanth Kamath, Somilya Anand, Suyash Buchke and Kaushikee Agnihotri
Eng. Proc. 2023, 59(1), 167; https://doi.org/10.3390/engproc2023059167 - 17 Jan 2024
Cited by 1 | Viewed by 1884
Abstract
Currently, we are in the 5G phase of the wireless technology cycle, where standardization is complete and deployment is underway. However, 5G networks lack the capacity to deliver an automated and intelligent network that supports connected intelligence. 6G is what enables this, and countries worldwide are aiming to lay the foundation for the communication needs of 2030. This raises a key question about how wireless communications will develop in the future, particularly in adapting to the range of applications and use cases. Industry and academic efforts have started to explore beyond 5G and uncover 6G as 5G becomes more internationally accessible. We forecast that 6G will undergo a transition unheard of in the history of wireless cellular systems. 6G extends beyond the mobile internet and will be required to support omnipresent AI services from the network’s core to its endpoints. Meanwhile, artificial intelligence (AI) will be crucial for developing and improving 6G designs, protocols, and operations. Ultra-reliable low-latency communication (URLLC) plays a crucial role in next-generation communication systems, particularly in 6G, for applications requiring ultra-low latency and high reliability. These services support cutting-edge technologies like driverless vehicles, remote robotic surgery, smart factories, and augmented reality applications. URLLC ensures robust connectivity and real-time responsiveness, enabling time-sensitive and safety-critical services in 6G communication infrastructures. This article illustrates the importance of URLLC in 6G and its integration with deep learning, the security challenges, and their potential solutions. Further on, it establishes its relationship with key aspects of federated learning and security in the 6G domain. Full article
(This article belongs to the Proceedings of Eng. Proc., 2023, RAiSE-2023)
Show Figures

Figure 1: Evolution of Wireless Technology.
Figure 2: An illustration of the concept of Federated Learning [1].
Figure 3: KPI vs. Requirement for 6G.