Search Results (4,116)

Search Parameters:
Keywords = drone

14 pages, 4677 KiB  
Article
Enhanced Transparency and Resistive Switching Characteristics in AZO/HfO2/Ti RRAM Device via Post Annealing Process
by Yuseong Jang, Chanmin Hwang, Sanggyu Bang and Hee-Dong Kim
Inorganics 2024, 12(12), 299; https://doi.org/10.3390/inorganics12120299 - 21 Nov 2024
Abstract
As interest in transparent electronics increases, ensuring the reliability of transparent RRAM (T-RRAM) devices, which can be used to construct transparent electronics, has become increasingly important. However, defects and traps within these T-RRAM devices can degrade their reliability. In this study, we investigated the improvement of transparency and reliability of T-RRAM devices with an AZO/HfO2/Ti structure through rapid thermal annealing (RTA) at 450 °C for 60 s in a nitrogen atmosphere. The device without RTA exhibited a low transmittance of 30%, whereas the device with RTA showed a significantly higher transmittance of over 75%. Furthermore, the device operated at lower current levels after RTA, which resulted in a reduction in its operating voltages, and the forming, setting, and reset voltages changed from 3.3, 2.4, and −5.1 V, respectively, to 2, 1, and −2.7 V. This led to an improvement in the endurance characteristics of the device, which thereby suggests that these improvements can be attributed to a reduction in the defects and trap density within the T-RRAM device caused by RTA. Full article
(This article belongs to the Special Issue Optical and Quantum Electronics: Physics and Materials)
Figures:
Figure 1. (a) Schematic structure, (b) cross-section FE-SEM images, and (c) transmittance at wavelengths of 200 nm to 1100 nm of the proposed T-RRAM.
Figure 2. Resistive switching characteristics of proposed T-RRAM (a) without RTA and (b) after RTA at 450 °C for 60 s in a nitrogen atmosphere. (c) Retention and (d) endurance characteristics of T-RRAM without RTA and after RTA.
Figure 3. XPS spectra of the Ti 2p region of the Ti top electrode without RTA and after RTA.
Figure 4. XRD patterns of as-deposited HfO2 and Ti films and HfO2 and Ti films after RTA, and (inset) average grain size of the HfO2 and Ti films.
Figure 5. SCLC mechanism at the positive bias of proposed T-RRAM (a) without RTA and (b) after RTA.
Figure 6. Band diagram of proposed T-RRAM (a) in the low-voltage region, (b) medium-voltage region and (c) high-voltage region. (The orange arrows indicate the direction of the electric field.)
Figure 7. Nyquist plot of proposed T-RRAM at (a) HRS and (b) LRS.
Figure 8. Equivalent circuit of proposed T-RRAM at (a) HRS and (b) LRS.
20 pages, 13179 KiB  
Article
A Study on the Monitoring of Floating Marine Macro-Litter Using a Multi-Spectral Sensor and Classification Based on Deep Learning
by Youchul Jeong, Jisun Shin, Jong-Seok Lee, Ji-Yeon Baek, Daniel Schläpfer, Sin-Young Kim, Jin-Yong Jeong and Young-Heon Jo
Remote Sens. 2024, 16(23), 4347; https://doi.org/10.3390/rs16234347 - 21 Nov 2024
Abstract
Increasing global plastic usage has raised critical concerns regarding marine pollution. This study addresses the pressing issue of floating marine macro-litter (FMML) by developing a novel monitoring system using a multi-spectral sensor and drones along the southern coast of South Korea. Subsequently, a convolutional neural network (CNN) model was utilized to classify four distinct marine litter materials: film, fiber, fragment, and foam. Automatic atmospheric correction with the drone data atmospheric correction (DROACOR) method, which is specifically designed for currently available drone-based sensors, ensured consistent reflectance across altitudes in the FMML dataset. The CNN models exhibited promising performance, with precision, recall, and F1 score values of 0.9, 0.88, and 0.89, respectively. Furthermore, gradient-weighted class activation mapping (Grad-CAM), an object recognition technique, allowed us to interpret the classification performance. Overall, this study will shed light on successful FMML identification using multi-spectral observations for broader applications in diverse marine environments. Full article
(This article belongs to the Special Issue Recent Progress in UAV-AI Remote Sensing II)
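The abstract above describes a CNN that classifies 128 × 128 × 5 multi-spectral patches into four litter classes (film, fiber, fragment, foam). The paper's exact architecture is not reproduced here, so the following is only a minimal PyTorch sketch of that kind of classifier; the layer widths, pooling choices, and training setup are illustrative assumptions, not the authors' CNN-3 model.

```python
# Minimal sketch (not the authors' CNN-3): a small CNN mapping
# 128 x 128 x 5 multi-spectral patches to four marine-litter classes.
import torch
import torch.nn as nn

CLASSES = ["film", "fiber", "fragment", "foam"]  # labels taken from the abstract

class LitterCNN(nn.Module):
    def __init__(self, in_bands: int = 5, n_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # global average pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, bands, height, width)
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = LitterCNN()
    patch = torch.randn(8, 5, 128, 128)            # dummy batch of patches
    print(model(patch).shape)                      # torch.Size([8, 4])
```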
Figures:
Figure 1. The overall workflow shows the processes that led to the classification of FMML using drone-acquired data and deep learning models. We performed three steps: (1) FMML exploration; (2) data processing for the deep learning models; and (3) deep learning to process FMML classification and visualization.
Figure 2. The study location on Gadeok Island in South Korea and the data acquisition location of the drone surveys in the study area in drone-based imagery (red rectangle). Maps of the study area and a Pix4Dmapper image were used to illustrate the data acquisition.
Figure 3. FMML dataset of images captured by the drone in the study area.
Figure 4. CNN architecture for the classification of FMML. The training, validation, and test sets comprised FMML datasets as input. The input image size was 128 × 128 × 5. The output was labeled as film, fiber, fragment, and foam for the FMML. This network consisted of input, feature learning, classification, and output.
Figure 5. Reflectance analysis of flight altitude through atmospheric correction. (a) A multi-spectral image was obtained on 29 March 2023 (true color RGB; R: 668 nm; G: 560 nm; B: 475 nm; a 51 m flight altitude). Images for atmospheric correction were acquired at altitudes of 23, 51, 70, 101, 127, 146, and 170 m. (b) The image values for each altitude of the orange film buoy image before atmospheric correction were compared. (c) The reflectance for each altitude of the orange film buoy image using a DROACOR atmospheric correction processor were compared.
Figure 6. Spectra of all FMML lists in the dataset from the DROACOR-calculated reflectance.
Figure 7. A confusion matrix of the CNN-3 model (x-axis: recall; y-axis: precision). The green box indicates correct classification by the model, and the red box indicates incorrect classification.
Figure 8. Visualization of FMML using Grad-CAM on CNN-3 model. (a–d) Confident detections of FMML dataset labels. (e–h) Unconfident detections of FMML dataset labels.
Figure 9. The well-classified and misclassified results of each category in the CNN-3 Model. All the images are Micasense multi-spectral images of band five. (a–d) Classified as fiber. (e–h) Classified as film. (i–l) Classified as foam. (m–p) Classified as fragment. Green and red circles indicate well-classified and misclassified results, respectively.
16 pages, 4570 KiB  
Article
Study of the Possibility to Combine Deep Learning Neural Networks for Recognition of Unmanned Aerial Vehicles in Optoelectronic Surveillance Channels
by Vladislav Semenyuk, Ildar Kurmashev, Dmitriy Alyoshin, Liliya Kurmasheva, Vasiliy Serbin and Alessandro Cantelli-Forti
Modelling 2024, 5(4), 1773-1788; https://doi.org/10.3390/modelling5040092 - 21 Nov 2024
Abstract
This article explores the challenges of integrating two deep learning neural networks, YOLOv5 and RT-DETR, to enhance the recognition of unmanned aerial vehicles (UAVs) within the optical-electronic channels of Sensor Fusion systems. The authors conducted an experimental study to test YOLOv5 and Faster RT-DETR in order to identify the average accuracy of UAV recognition. A dataset in the form of images of two classes of objects, UAVs, and birds, was prepared in advance. The total number of images, including augmentation, amounted to 6337. The authors implemented training, verification, and testing of the neural networks exploiting PyCharm 2024 IDE. Inference testing was conducted using six videos with UAV flights. On all test videos, RT-DETR-R50 was more accurate by an average of 18.7% in terms of average classification accuracy (Pc). In terms of operating speed, YOLOv5 was 3.4 ms more efficient. It has been established that the use of RT-DETR as the only module for UAV classification in optical-electronic detection channels is not effective due to the large volumes of calculations, which is due to the relatively large number of parameters. Based on the obtained results, an algorithm for combining two neural networks is proposed, which allows for increasing the accuracy of UAV and bird classification without significant losses in speed. Full article
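The paper's own combination algorithm is presented in its Figure 8 and is not reproduced in this listing. The sketch below only illustrates one common way to cascade a fast detector with a slower, more accurate one, which is the general trade-off the abstract describes; the detector wrappers, confidence thresholds, and escalation rule are all assumptions, not the authors' method.

```python
# Illustrative cascade of a fast detector (YOLOv5-like) with a slower, more
# accurate one (RT-DETR-like). NOT the authors' algorithm; it only shows the
# idea of escalating uncertain frames to the heavier model.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    label: str         # e.g. "uav" or "bird"
    confidence: float  # class probability in [0, 1]
    box: tuple         # (x1, y1, x2, y2)

def cascade_detect(
    frame,
    fast_detector: Callable[[object], List[Detection]],      # assumed YOLOv5 wrapper
    accurate_detector: Callable[[object], List[Detection]],  # assumed RT-DETR wrapper
    low: float = 0.4,
    high: float = 0.8,
) -> List[Detection]:
    """Run the fast model first; re-check the frame with the accurate model
    only when the fast model is uncertain (max confidence between low and high)."""
    fast = fast_detector(frame)
    if not fast:
        return []
    best = max(d.confidence for d in fast)
    if best >= high or best < low:
        # Confident detection or clear miss: keep the fast result.
        return fast
    # Ambiguous case: spend the extra compute on the transformer detector.
    return accurate_detector(frame)
```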
Figures:
Figure 1. Data set preparation in Roboflow.com service: (a) Annotation of UAVs and birds; (b) Data set partitioning interface for training, validation, and testing of neural networks.
Figure 2. Metrics of the results of training the YOLOv5 neural network for 100 epochs (Ox-axis): (a) Precision; (b) Recall; (c) mAP50; (d) mAP50-95.
Figure 3. Metrics of the results of training the RT-DETR neural network for 100 epochs (axis Ox): (a) Precision; (b) Recall; (c) mAP50; (d) mAP50-95.
Figure 4. Example of data obtained as a result of validation of the YOLOv5 experimental model.
Figure 5. Example of data obtained from the validation of the RT-DETR experimental model.
Figure 6. Frames from inference tests of trained neural network models: (a,c) RT-DETR-R50; (b,d) YOLOv5s.
Figure 7. Comparative diagram of the values of the average class probability in UAV recognition by trained neural network models.
Figure 8. Algorithm for combining trained neural network models YOLOv5s and RT-DETR-R50.
27 pages, 7620 KiB  
Article
Maturity Prediction in Soybean Breeding Using Aerial Images and the Random Forest Machine Learning Algorithm
by Osvaldo Pérez, Brian Diers and Nicolas Martin
Remote Sens. 2024, 16(23), 4343; https://doi.org/10.3390/rs16234343 - 21 Nov 2024
Viewed by 47
Abstract
Several studies have used aerial images to predict physiological maturity (R8 stage) in soybeans (Glycine max (L.) Merr.). However, information for making predictions in the current growing season using models fitted in previous years is still necessary. Using the Random Forest machine learning algorithm and time series of RGB (red, green, blue) and multispectral images taken from a drone, this work aimed to study, in three breeding experiments of plant rows, how maturity predictions are impacted by a number of factors. These include the type of camera used, the number and time between flights, and whether models fitted with data obtained in one or more environments can be used to make accurate predictions in an independent environment. Applying principal component analysis (PCA), it was found that compared to the full set of 8–10 flights (R2 = 0.91–0.94; RMSE = 1.8–1.3 days), using data from three to five fights before harvest had almost no effect on the prediction error (RMSE increase ~0.1 days). Similar prediction accuracy was achieved using either a multispectral or an affordable RGB camera, and the excess green index (ExG) was found to be the important feature in making predictions. Using a model trained with data from two previous years and using fielding notes from check cultivars planted in the test season, the R8 stage was predicted, in 2020, with an error of 2.1 days. Periodically adjusted models could help soybean breeding programs save time when characterizing the cycle length of thousands of plant rows each season. Full article
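The abstract identifies the excess green index (ExG) as the most important image feature and Random Forest as the predictive model. The sketch below shows that general approach (compute ExG per plant-row cell across flights, then regress the R8 day of year); the ExG normalization, feature layout, and dummy data are illustrative assumptions, not the paper's pipeline.

```python
# Sketch of the approach described above: an excess green index (ExG) per
# plant-row cell over several flight dates feeds a Random Forest that
# predicts the day of year of the R8 stage.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def excess_green(red: np.ndarray, green: np.ndarray, blue: np.ndarray) -> np.ndarray:
    """ExG = 2g - r - b computed on band values normalized by their sum."""
    total = red + green + blue + 1e-9
    r, g, b = red / total, green / total, blue / total
    return 2.0 * g - r - b

rng = np.random.default_rng(0)
n_rows, n_flights = 500, 8                      # plant rows x drone flights (dummy sizes)
exg_series = rng.uniform(-0.2, 0.6, size=(n_rows, n_flights))  # stand-in for real ExG values
r8_day = rng.integers(250, 290, size=n_rows)    # stand-in for observed R8 day of year

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(exg_series, r8_day)                   # train on one season ...
print(np.round(model.predict(exg_series[:5]), 1))  # ... predict for new plant rows
```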
Figures:
Figure 1. Pipeline workflow diagram of a high-throughput phenotyping platform for predicting soybean physiological maturity (R8 stage) of three breeding experiments (2018–2020) containing trials divided into plant rows of F4:5 experimental lines grown at the University of Illinois Research and Education Center near Savoy, IL. On the top right, overlapped on the satellite image, © Google, 2024 [31], three selected orthophotos corresponding to these experiments were taken from a drone on the same flight date (10 September). The colored polygons indicate the effective area of the soybean breeding blocks (trials) for which physiological maturity was predicted. The magnified orthophoto (10 September 2019) shows the cell grid that was used to associate the pixels within each cell to the day of the year in which the plant row reached the R8 stage.
Figure 2. Partial visualization of composed orthophotos obtained from time series of images taken from a drone flying over three soybean breeding experiments (2018–2020). The experiments, containing plant rows of F4:5 experimental lines, were grown at the University of Illinois Research and Education Center near Savoy, IL. The imagery was collected in a total of eight flight dates in 2018, ten in 2019, and nine in 2020, although only four flight dates per year are shown according to the best matching day of the year. The raster information within each cell grid was used to predict the day of the year the plant row reached physiological maturity. All the orthophotos show the three visual spectral bands (red, green, and blue); however, while the images were taken with a digital RGB camera in 2018, in 2019 and 2020, they were taken with a multispectral camera of five bands: red, green, blue, red edge, and near-infrared.
Figure 3. The histograms (in green) show the distribution of soybean physiological maturity (R8 stage) dates for three experiments of plant rows of F4:5 experimental lines grown at the University of Illinois Research and Education Center near Savoy, IL (2018–2020). The histograms (in blue) also show the distribution of the R8 stage dates, but according to what plant rows were assigned per individual (A–F) to take the field notes.
Figure 4. The boxplots show the bias of predictions (days) for soybean physiological maturity (R8 stage) according to the individuals (A–F) who together took 9252, 11,742, and 11,197 field notes from three experiments: 2018 (top), 2019 (middle), and 2020 (bottom), respectively. The experiments contained plant rows of F4:5 experimental lines grown at the University of Illinois Research and Education Center near Savoy, IL. The Random Forest algorithm was used to adjust the predictive models using different training data sizes according to what plant rows were assigned per individual (A–F). The empty boxplot spaces mean that 44.2%, 28.5%, and 27.2% of field notes, taken respectively by A, B, and C, were used to train the models in 2018. In 2019, the proportions were 21.2%, 37.9%, 11.1%, 12.8%, and 17.0% (A, D–G); and in 2020, they were 45.3%, 19.6%, 17.5%, and 17.7% (A, B and C, D, and E).
Figure 5. Soybean physiological maturity (R8 stage) predictions corresponding to three breeding experiments containing plant rows of F4:5 experimental lines grown at the University of Illinois Research and Education Center near Savoy, IL (2018–2020). The Random Forest algorithm was applied to associate the field recorded values with three classification variables (breeding block, the individual who took the field notes, and the check cultivar) and 32 image features (red, green, blue, and a calculated excess green index, ExG) obtained from eight drone flights. (a–c) The relationship between predicted vs. field recorded values using all the field records, and (d–f) the same, but after filtering records of plant rows that reached the R8 stage after the last drone flight date (26, 24, and 30 September, respectively, for 2018, 2019, and 2020). An equal training:test data ratio (80:20) was maintained for the three experiments (n = test data). The deviation of the regression line (blue) from the 1:1 line (gray) indicates the model's prediction bias.
Figure 6. Variable importance measure of the 15 most relevant variables for predicting soybean physiological maturity (R8 stage) of three experiments containing plant rows of F4:5 experimental lines grown at the University of Illinois Research and Education Center near Savoy, IL. Spectral bands extracted from time series of images taken from a drone and the excess green index (ExG) were included in the models as explanatory variables with three other classification variables: the breeding block (Block), the individual who took the field notes (Ind.), and the check cultivar (which does not show relevant importance). In 2018, the images were taken from a drone with a digital RGB (red, green, blue) camera, whereas in 2019 and 2020, they were taken with a multispectral camera. For the latter two years, the analyses were divided into using only the red (R), green (G), and blue (B) bands (simulating a digital RGB camera) and using the five spectral bands: R, G, B, R edge, and near-infrared (NIR).
Figure 7. Principal component analysis (PCA) of 32 variables belonging to a time series of RGB (red, green, blue) images and a calculated excess green index (ExG). The images were taken across eight drone flights carried out over a soybean breeding experiment (planted on 22 May 2018) containing plant rows of F4:5 experimental lines grown at the University of Illinois Research and Education Center near Savoy, IL. (a) Shows a regression analysis between PC1 scores and soybean physiological maturity (R8 stage); and (b) a posteriori association between the response variable (R8 stage) and the image features, where A and S indicate August and September 2018, respectively.
Figure 8. Soybean physiological maturity (R8 stage) predictions for 2020 using four models trained with field recorded values collected from two previous experiments (2018–2019). The three experiments corresponded to breeding experiments containing plant rows of F4:5 experimental lines grown at the University of Illinois Research and Education Center near Savoy, IL. The four models were adjusted by applying the Random Forest algorithm to associate the field recorded values with a time series of the excess green index (ExG) and three classification variables (breeding block, the individual who took the field notes, and the check cultivar). Calculated from the red, green, and blue spectral bands, ExG was obtained from digital images taken with a drone. The four models were adjusted using the following training:test data relationship: (a) Training 2019:Test 2020 (n = 51:49); (b) Training 2019 (plus 2020 checks):Test 2020 (without checks) (n = 53:47); (c) Training 2018–2019:Test 2020 (n = 65:35); and (d) Training 2018–2019 (plus 2020 checks):Test 2020 (without checks) (n = 67:33). The deviation of the regression line (blue) from the 1:1 line (gray) indicates the model's prediction bias. The table below the figures gives the data used to train the models in each figure (a–d).
Figure 9. (a) Frequencies, (b) residuals, and (c) images showing prediction deviations for soybean physiological maturity (R8 stage) collected in a breeding experiment with plant rows of F4:5 experimental lines in 2020. The mean residual (red line) indicates in (b) the prediction bias across time compared to predictions with zero bias from the observed R8 dates (gray dashed line). The images on the right show the excess green index (ExG), which is calculated with the red, green, and blue bands (images on the left). On the top of (c), the images show the three worst maturity predictions identified on (b); the bottom shows three examples considering predictions with an error of 2, 1, and 0 days from 30 September. The maturity predictions were carried out using a model (Figure 8b) trained with data collected in a breeding experiment planted in 2019 (n = 11,197) and in the eight check cultivars replicated in the 2020 experiment. The 2020 experiment minus the checks (n = 11,197–493) was used to test the model, which was adjusted with the Random Forest algorithm using time series of ExG and three classification variables (breeding block, the individual who took the field notes, and the check cultivar).
22 pages, 1366 KiB  
Article
Mobility-Aware Task Offloading and Resource Allocation in UAV-Assisted Vehicular Edge Computing Networks
by Long Chen, Jiaqi Du and Xia Zhu
Drones 2024, 8(11), 696; https://doi.org/10.3390/drones8110696 - 20 Nov 2024
Viewed by 138
Abstract
The rapid development of the Internet of Vehicles (IoV) and intelligent transportation systems has led to increased demand for real-time data processing and computation in vehicular networks. To address these needs, this paper proposes a task offloading framework for UAV-assisted Vehicular Edge Computing (VEC) systems, which considers the high mobility of vehicles and the limited coverage and computational capacities of drones. We introduce the Mobility-Aware Vehicular Task Offloading (MAVTO) algorithm, designed to optimize task offloading decisions, manage resource allocation, and predict vehicle positions for seamless offloading. MAVTO leverages container-based virtualization for efficient computation, offering flexibility in resource allocation in multiple offload modes: direct, predictive, and hybrid. Extensive experiments using real-world vehicular data demonstrate that the MAVTO algorithm significantly outperforms other methods in terms of task completion success rate, especially under varying task data volumes and deadlines. Full article
(This article belongs to the Special Issue UAV-Assisted Intelligent Vehicular Networks 2nd Edition)
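The MAVTO algorithm itself is not detailed in this listing. As a rough illustration of the kind of decision it makes (direct offloading to the current UAV versus predictive offloading to a UAV expected to provide coverage later, subject to a deadline), the toy feasibility check below compares upload plus compute time against the task deadline. All names, rates, and numbers are hypothetical.

```python
# Not the MAVTO algorithm: a toy feasibility check capturing the kind of
# decision it makes, i.e. can a task's upload + computation finish before its
# deadline on a given UAV, or should it wait for a predicted future UAV?
from dataclasses import dataclass

@dataclass
class Task:
    data_mb: float        # data volume to upload
    cycles_g: float       # required compute, in gigacycles
    deadline_s: float     # completion deadline, seconds from now

@dataclass
class UavOption:
    name: str
    uplink_mbps: float    # achievable uplink rate while in coverage
    cpu_ghz: float        # CPU share the UAV container can allocate
    available_in_s: float # 0 for the current UAV, >0 for a predicted one

def completion_time(task: Task, uav: UavOption) -> float:
    upload = task.data_mb * 8.0 / uav.uplink_mbps   # seconds to transmit
    compute = task.cycles_g / uav.cpu_ghz           # seconds to execute
    return uav.available_in_s + upload + compute

def choose_uav(task: Task, options: list[UavOption]) -> UavOption | None:
    feasible = [u for u in options if completion_time(task, u) <= task.deadline_s]
    # Pick the feasible option that finishes earliest (direct vs. predictive mode).
    return min(feasible, key=lambda u: completion_time(task, u)) if feasible else None

task = Task(data_mb=20, cycles_g=4, deadline_s=6)
print(choose_uav(task, [UavOption("current", 40, 1.5, 0.0),
                        UavOption("predicted", 80, 3.0, 2.0)]))
```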
Figures:
Figure 1. Task offloading from vehicles moving in both directions in UAV-assisted Vehicular Edge Computing.
Figure 2. Direct offloading model.
Figure 3. Prediction offloading model.
Figure 4. Mixed offloading model.
Figure 5. Example diagram for calculating the remaining travel distance of the vehicle.
Figure 6. The performance of different task offloading sequences under a 95% Tukey HSD confidence interval.
Figure 7. The performance of different task offloading strategies under a 95% Tukey HSD confidence interval.
Figure 8. The performance of different resource allocation strategies under a 95% Tukey HSD confidence interval.
Figure 9. Interaction plots of the compared algorithms for tests with different vehicle numbers and task data volumes under a 95.0% Tukey HSD confidence interval.
Figure 10. Interaction plots of the compared algorithms for tests with different container numbers and task data volume intervals under a 95.0% Tukey HSD confidence interval.
Figure 11. Interaction plots of the compared algorithms for tests with different vehicle numbers and task deadlines under a 95.0% Tukey HSD confidence interval.
13 pages, 46604 KiB  
Article
Human Activity Recognition Based on Point Clouds from Millimeter-Wave Radar
by Seungchan Lim, Chaewoon Park, Seongjoo Lee and Yunho Jung
Appl. Sci. 2024, 14(22), 10764; https://doi.org/10.3390/app142210764 - 20 Nov 2024
Viewed by 221
Abstract
Human activity recognition (HAR) technology is related to human safety and convenience, making it crucial for it to infer human activity accurately. Furthermore, it must consume low power at all times when detecting human activity and be inexpensive to operate. For this purpose, a low-power and lightweight design of the HAR system is essential. In this paper, we propose a low-power and lightweight HAR system using point-cloud data collected by radar. The proposed HAR system uses a pillar feature encoder that converts 3D point-cloud data into a 2D image and a classification network based on depth-wise separable convolution for lightweighting. The proposed classification network achieved an accuracy of 95.54%, with 25.77 M multiply–accumulate operations and 22.28 K network parameters implemented in a 32 bit floating-point format. This network achieved 94.79% accuracy with 4 bit quantization, which reduced memory usage to 12.5% compared to existing 32 bit format networks. In addition, we implemented a lightweight HAR system optimized for low-power design on a heterogeneous computing platform, a Zynq UltraScale+ ZCU104 device, through hardware–software implementation. It took 2.43 ms of execution time to perform one frame of HAR on the device and the system consumed 3.479 W of power when running. Full article
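The lightweighting the abstract credits comes largely from depth-wise separable convolution. The PyTorch sketch below shows that building block and compares its parameter count with a standard 3 × 3 convolution; the channel sizes are arbitrary examples, not the paper's network, and the pillar feature encoder is not reproduced here.

```python
# Sketch of the lightweighting idea mentioned above: a depth-wise separable
# convolution block (depth-wise 3x3 followed by point-wise 1x1).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

def count_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

# Compare against a standard 3x3 convolution with the same channel sizes.
standard = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
separable = DepthwiseSeparableConv(64, 128)
print(count_params(standard), count_params(separable))  # separable is far smaller
```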
Figures:
Figure 1. Data collection setup.
Figure 2. Configuration of dataset classes and their corresponding point clouds: (a) Stretching; (b) Standing; (c) Taking medicine; (d) Squatting; (e) Sitting chair; (f) Reading news; (g) Sitting floor; (h) Picking; (i) Crawl; (j) Lying wave hands; (k) Lying.
Figure 3. Overview of the proposed HAR system.
Figure 4. Proposed classification network.
Figure 5. Training and test loss curve and accuracy curve: (a) Training and test loss curve; (b) Training and test accuracy curve.
Figure 6. Confusion matrix.
Figure 7. Environment used for FPGA implementation and verification.
31 pages, 1889 KiB  
Article
Drone Swarm for Distributed Video Surveillance of Roads and Car Tracking
by David Sánchez Pedroche, Daniel Amigo, Jesús García, José M. Molina and Pablo Zubasti
Drones 2024, 8(11), 695; https://doi.org/10.3390/drones8110695 - 20 Nov 2024
Viewed by 234
Abstract
This study proposes a swarm-based Unmanned Aerial Vehicle (UAV) system designed for surveillance tasks, specifically for detecting and tracking ground vehicles. The proposal is to assess how a system consisting of multiple cooperating UAVs can enhance performance by utilizing fast detection algorithms. Within the study, the differences in one-stage and two-stage detection models have been considered, revealing that while two-stage models offer improved accuracy, their increased computation time renders them impractical for real-time applications. Consequently, faster one-stage models, such as the tested YOLOv8 architectures, appear to be a more viable option for real-time operations. Notably, the swarm-based approach enables these faster algorithms to achieve an accuracy level comparable to that of slower models. Overall, the experimentation analysis demonstrates how larger YOLO architectures exhibit longer processing times in exchange for superior tracking success rates. However, the inclusion of additional UAVs introduced in the system outweighed the choice of the tracking algorithm if the mission is correctly configured, thus demonstrating that the swarm-based approach facilitates the use of faster algorithms while maintaining performance levels comparable to slower alternatives. However, the perspectives provided by the included UAVs hold additional significance, as they are essential for achieving enhanced results. Full article
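As a minimal sketch of the per-UAV detection step described above (not the paper's pipeline), the snippet below runs a single-stage YOLOv8 model on frames from several UAVs and tags each detection with the UAV it came from, so a downstream data-fusion step can merge the per-perspective results. It assumes the ultralytics package and a yolov8n.pt checkpoint are available; the function name and frame format are illustrative.

```python
# Minimal sketch (not the paper's pipeline): one-stage detection on frames
# from several UAVs, keeping track of which perspective produced each box.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small, fast variant; larger variants trade speed for accuracy

def detect_for_swarm(frames_by_uav: dict[str, object], conf: float = 0.25):
    """frames_by_uav maps a UAV id to its current video frame (numpy image)."""
    detections = []
    for uav_id, frame in frames_by_uav.items():
        result = model(frame, conf=conf, verbose=False)[0]
        for box in result.boxes:
            detections.append({
                "uav": uav_id,                    # which perspective saw it
                "xyxy": box.xyxy[0].tolist(),     # pixel coordinates
                "conf": float(box.conf[0]),
                "cls": result.names[int(box.cls[0])],
            })
    return detections  # input to trajectory data fusion across perspectives
```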
Figures:
Figure 1. Detection algorithm applied from different perspectives of the UAVs.
Figure 2. Details of the UAV mission.
Figure 3. System overview.
Figure 4. Detection and tracking process.
Figure 5. Trajectory data-fusion process.
Figure 6. Scenarios 1 and 2: example of crossing vehicles from the perspective of a single UAV.
Figure 7. Scenario 4: example of a vehicle moving through the roundabout and changing the UAV perspective.
Figure 8. Scenario 6 from the perspective of UAV 1. Each line represents the detections performed by each UAV on each vehicle from the presented perspective.
Figure 9. Comparison of results of applying a segmentation algorithm.
Figure 10. Example of different perspectives in a scenario with multiple vehicles.
Figure 11. ATS heatmap for different scenarios and numbers of UAVs.
Figure 12. Values comparison for YOLO and RT-DETR.
Figure 13. Box plot over experiments with 1, 3, and 5 UAVs for YOLO and RT-DETR.
24 pages, 4039 KiB  
Review
A Review and Bibliometric Analysis of Unmanned Aerial System (UAS) Noise Studies Between 2015 and 2024
by Chuyang Yang, Ryan J. Wallace and Chenyu Huang
Acoustics 2024, 6(4), 997-1020; https://doi.org/10.3390/acoustics6040055 - 20 Nov 2024
Viewed by 347
Abstract
Unmanned aerial systems (UAS), commonly known as drones, have gained widespread use due to their affordability and versatility across various domains, including military, commercial, and recreational sectors. Applications such as remote sensing, aerial imaging, agriculture, firefighting, search and rescue, infrastructure inspection, and public safety have extensively adopted this technology. However, environmental impacts, particularly noise, have raised concerns among the public and local communities. Unlike traditional crewed aircraft, drones typically operate in low-altitude airspace (below 400 feet or 122 m), making their noise impact more significant when they are closer to houses, people, and livestock. Numerous studies have explored methods for monitoring, assessing, and predicting the noise footprint of drones. This study employs a bibliometric analysis of relevant scholarly works in the Web of Science Core Collection, published from 2015 to 2024, following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) data collection and screening procedures. The International Journal of Environmental Research and Public Health, Aerospace Science and Technology, and the Journal of the Acoustical Society of America are the top three preferred outlets for publications in this area. This review unveils trends, topics, key authors and institutions, and national contributions in the field through co-authorship analysis, co-citation analysis, and other statistical methods. By addressing the identified challenges, leveraging emerging technologies, and fostering collaborations, the field can move towards more effective noise abatement strategies, ultimately contributing to the broader acceptance and sustainable integration of UASs into various aspects of society. Full article
(This article belongs to the Special Issue Vibration and Noise (2nd Edition))
Figures:
Figure 1. PRISMA flow diagram, adapted from [30].
Figure 2. Density visualization of co-authorship (fractional counting).
Figure 3. Collaboration network by countries.
Figure 4. Collaboration network by institutions.
Figure 5. UAS noise problem framework.
Figure 6. Author keyword cloud.
Figure 7. A new (active) UAS noise monitoring, assessment, and prediction framework.
Figure A1. Co-occurrence network of topics (Title and Abstract).
Figure A2. Network visualization of co-citation (fractional counting).
11 pages, 1803 KiB  
Review
Aerial Remote Sensing of Aquatic Microplastic Pollution: The State of the Science and How to Move It Forward
by Dominique Chabot and Sarah C. Marteinson
Microplastics 2024, 3(4), 685-695; https://doi.org/10.3390/microplastics3040042 - 20 Nov 2024
Viewed by 470
Abstract
Microplastics (MPs) are pervasive environmental contaminants in aquatic systems. Due to their small size, they can be ingested by aquatic biota, and numerous negative effects have been documented. Determining the risks to aquatic organisms is reliant on characterizing the environmental presence and concentrations of MPs, and developing efficient ways to do so over wide scales by means of aerial remote sensing would be beneficial. We conducted a systematic literature review to assess the state of the science of aerial remote sensing of aquatic MPs and propose further research steps to advance the field. Based on 28 key references, we outline three main approaches that currently remain largely experimental rather than operational: remote sensing of aquatic MPs based on (1) their spectral characteristics, (2) their reduction of water surface roughness, and (3) indirect proxies, notably other suspended water constituents. The first two approaches have the most potential for wide-scale monitoring, and the spectral detection of aquatic MPs is seemingly the most direct approach, with the fewest potential confounding factors. Whereas efforts to date have focused on inherently challenging detection in coarse-resolution satellite imagery, we suggest that better progress could be made by experimenting with image acquisition at much lower altitudes and finer spatial and spectral resolutions, which can be conveniently achieved using drones equipped with high-precision hyperspectral sensors. Beyond developing drone-based aquatic MP monitoring capabilities, such experiments could help with upscaling to satellite-based monitoring for global coverage. Full article
Figures:
Figure 1. Simplified visual summaries of the two primary approaches to aerial remote sensing of aquatic microplastics: (left) based on their spectral reflectance of solar radiation measured with passive optical sensors and (right) based on their reduction of water surface roughness measured with active radar sensors.
Figure 2. Comparison of the reflected spectral signature of a marine-harvested mixture of microplastics (top illustration, from Garaba and Dierssen [20]) to the spectral band distributions of currently operational (blue) and decommissioned (red) optical Earth observation satellites (bottom illustration, from Schmidt et al. [24]); the orange dotted lines overlaying both illustrations indicate the most distinctive spectral features of microplastics that could potentially be detected by passive satellite-borne sensors, taking into account the atmospheric transmittance of solar radiation.
Figure 3. When the water surface is calm, microwave radiation pulses emitted by satellite-borne synthetic aperture radar (SAR) experience mirror-like (i.e., 'specular') reflection from the surface. Since SAR pulses are emitted at an angle, they reflect away from the sensor, and the water surface therefore appears dark in the resulting image. Under moderate wind conditions, the formation of capillary waves results in a more uneven (or 'rough') water surface that reflects SAR pulses in multiple directions (i.e., 'diffuse' reflection), including back toward the sensor, causing the water surface to appear brighter. The presence of surfactants at the surface generated by microbial digestion of microplastics partially suppresses capillary waves, causing the water surface to be smoother than usual under moderate wind conditions, and therefore appear darker than expected in SAR imagery. In this Sentinel-1 SAR image of the North Atlantic Ocean from Davaasuren et al. [28], acquired under moderate wind conditions in an area of low chlorophyll-a concentrations (precluding algal bloom-generated surfactants), it is hypothesized that the patterns of varying water surface brightness result from concentrations of microplastic-related surfactants in the darker areas, where they have caused relative smoothening of the surface.
20 pages, 4297 KiB  
Article
Precision and Efficiency in Dam Crack Inspection: A Lightweight Object Detection Method Based on Joint Distillation for Unmanned Aerial Vehicles (UAVs)
by Hangcheng Dong, Nan Wang, Dongge Fu, Fupeng Wei, Guodong Liu and Bingguo Liu
Drones 2024, 8(11), 692; https://doi.org/10.3390/drones8110692 - 19 Nov 2024
Viewed by 264
Abstract
Dams in their natural environment will gradually develop cracks and other forms of damage. If not detected and repaired in time, the structural strength of the dam may be reduced, and it may even collapse. Repairing cracks and defects in dams is very important to ensure their normal operation. Traditional detection methods rely on manual inspection, which consumes a lot of time and labor, while deep learning methods can greatly alleviate this problem. However, previous studies have often focused on how to better detect crack defects, with the corresponding image resolution not being particularly high. In this study, targeting the scenario of real-time detection by drones, we propose an automatic detection method for dam crack targets directly on high-resolution remote sensing images. First, for high-resolution remote sensing images, we designed a sliding window processing method and proposed corresponding methods to eliminate redundant detection frames. Then, we introduced a Gaussian distribution in the loss function to calculate the similarity of predicted frames and incorporated a self-attention mechanism in the spatial pooling module to further enhance the detection performance of crack targets at various scales. Finally, we proposed a pruning-after-distillation scheme, using the compressed model as the student and the pre-compression model as the teacher and proposed a joint distillation method that allows more efficient distillation under this compression relationship between teacher and student models. Ultimately, a high-performance target detection model can be deployed in a more lightweight form for field operations such as UAV patrols. Experimental results show that our method achieves an mAP of 80.4%, with a parameter count of only 0.725 M, providing strong support for future tasks such as UAV field inspections. Full article
(This article belongs to the Special Issue Advances in Detection, Security, and Communication for UAV)
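The abstract describes tiling high-resolution images with an overlapping sliding window and then removing the redundant detection boxes that the overlap creates. The sketch below shows that general pattern (tile, detect per tile, shift boxes back to full-image coordinates, greedy non-maximum suppression); the detector callable, tile size, and thresholds are placeholders, not the authors' implementation.

```python
# Illustrative sliding-window detection over a high-resolution image with
# overlap, followed by simple NMS to drop duplicate boxes across tile borders.
import numpy as np

def tile_image(image: np.ndarray, tile: int = 1024, overlap: int = 256):
    """Yield (x0, y0, crop) for overlapping windows covering the image."""
    h, w = image.shape[:2]
    step = tile - overlap
    for y0 in range(0, max(h - overlap, 1), step):
        for x0 in range(0, max(w - overlap, 1), step):
            yield x0, y0, image[y0:y0 + tile, x0:x0 + tile]

def iou(a, b):
    ax1, ay1, ax2, ay2 = a; bx1, by1, bx2, by2 = b
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def detect_full_image(image, detector, iou_thr: float = 0.5):
    """detector(crop) -> list of (x1, y1, x2, y2, score) in crop coordinates."""
    boxes = []
    for x0, y0, crop in tile_image(image):
        for x1, y1, x2, y2, s in detector(crop):
            boxes.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0, s))
    # Greedy non-maximum suppression over tile borders.
    boxes.sort(key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b[:4], k[:4]) < iou_thr for k in kept):
            kept.append(b)
    return kept
```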
Figures:
Figure 1. Flowchart of the overall scheme of this work.
Figure 2. Remote sensing images captured by unmanned aerial vehicles.
Figure 3. Schematic diagram of overlapping sliding window cutting.
Figure 4. The network composition of YOLO_v5.
Figure 5. The structure of the feature extraction backbone in YOLO_v5.
Figure 6. Dam cracks vary widely in shape and size.
Figure 7. Targets of different sizes are inconsistently sensitive to IOU calculations.
Figure 8. Decomposition of LKA convolutional structures.
Figure 9. The convolutional structure of LSKA.
Figure 10. Feature fusion module with the incorporation of LSKA module.
Figure 11. The overall framework of the joint feature knowledge distillation algorithm.
Figure 12. Distillation strategy based on output information.
Figure 13. Distillation strategy based on feature maps.
Figure 14. Detection results of original images of dam cracks.
Figure 15. Comparison of detection results for cracks in dams. (a) Original image; (b) results of Yolov5n; (c) results of Yolov5n_ns.
Figure 16. Comparison of results and heatmap from different distillation methods. (a) represents the model after pruning; (b) represents the model after local distillation based on feature maps; (c) represents the model with knowledge distillation based on output information; and (d) represents the model with the multi-strategy joint distillation algorithm.
Figure 17. Comparison of results and heatmap from different distillation methods on another image. (a) represents the model after pruning; (b) represents the model after local distillation based on feature maps; (c) represents the model with knowledge distillation based on output information; and (d) represents the model with the multi-strategy joint distillation algorithm.
26 pages, 10461 KiB  
Article
Accuracy and Precision of Shallow-Water Photogrammetry from the Sea Surface
by Elisa Casella, Giovanni Scicchitano and Alessio Rovere
Remote Sens. 2024, 16(22), 4321; https://doi.org/10.3390/rs16224321 - 19 Nov 2024
Viewed by 458
Abstract
Mapping shallow-water bathymetry and morphology represents a technical challenge. In fact, acoustic surveys are limited by water depths reachable by boat, and airborne surveys have high costs. Photogrammetric approaches (either via drone or from the sea surface) have opened up the possibility to perform shallow-water surveys easily and at accessible costs. This work presents a simple, low-cost, and highly portable platform that allows gathering sequential photos and echosounder depth values of shallow-water sites (up to 5 m depth). The photos are then analysed in conjunction with photogrammetric techniques to obtain digital bathymetric models and orthomosaics of the seafloor. The workflow was tested on four repeated surveys of the same area in the Western Mediterranean and allowed obtaining digital bathymetric models with centimetric average accuracy and precision and root mean square errors within a few decimetres. The platform presented in this work can be employed to obtain first-order bathymetric products, enabling the contextual establishment of the depth accuracy of the final products. Full article
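The accuracy figures quoted above come from comparing the photogrammetric depth model against independent echosounder points. A small sketch of that kind of check is shown below, reporting the mean difference (bias) and RMSE; the arrays are dummy stand-ins for real survey data, not the paper's measurements.

```python
# Sketch of the accuracy assessment described above: compare depths sampled
# from a digital bathymetric model (DBM) against echosounder control points.
import numpy as np

def dbm_accuracy(dbm_depths: np.ndarray, echo_depths: np.ndarray):
    diff = dbm_depths - echo_depths          # per-point depth difference (m)
    bias = float(diff.mean())                # average accuracy
    rmse = float(np.sqrt((diff ** 2).mean()))
    return bias, rmse

rng = np.random.default_rng(1)
echo = rng.uniform(0.5, 5.0, size=200)               # control points up to ~5 m depth
dbm = echo + rng.normal(0.02, 0.15, size=echo.size)  # DBM with small bias and noise
print("bias = %.3f m, RMSE = %.3f m" % dbm_accuracy(dbm, echo))
```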
Figures:
Figure 1. (A) Map of the Italian Peninsula; the star indicates the study site. The site where the test area (dashed line) is located is shown in (B) an orthomosaic of the area (background image from Google Earth, 2022) and (C) an oblique drone photo.
Figure 2. Field setup used in this study. An operator working in snorkelling is dragging a diver's buoy on top of which are fixed a dry case with a GNSS receiver (1) and a mobile phone (2). Fixed on the underwater part of the diver's buoy are a GoPro camera (3) and a portable echosounder (4). See text for details. The drawing is not to scale.
Figure 3. Example of results obtained using the workflow outlined in the main text. (A) Grid pattern followed by the snorkelling operator. (B) Orthomosaic (with hillshade in the background). (C) Digital bathymetric model (DBM) and echosounder points. Panels A, B, and C refer to the survey performed on 13 August 2020. The same results for all surveys are shown in Figure A2. (D–G) show an example of a picture for each survey date. The location pin (also shown in panel B) helps orient the image and place it in the reconstructed scene.
Figure 4. Percentage of points and corresponding confidence calculated by Agisoft Metashape. Note that the surveys of 28 July and 13 August have higher confidence than the other two surveys, for which fewer photos were aligned by the program.
Figure 5. Histograms showing the depth differences between DBM depths and control echosounder points (which represent the accuracy of each DBM), with average difference and RMSE for each survey date (panels A–D). For a plot of echosounder depths versus DBM depths, see Figure A4.
Figure A1. (A) Screenshot of the echosounder during data collection. The upper part shows the map location, while the lower part shows the sonogram surveyed by the echosounder. (B) Picture of the GNSS screen. These data are needed to synchronise the pictures taken with the GoPro camera with GNSS time.
Figure A2. Same as in Figure 3, but for all survey dates. The orthomosaics and DBMs shown here are not aligned to the 13 August one.
Figure A3. Heatmap showing the RMSE between echosounder control points and DBM depths, divided by survey date and depth bin. Darker blue colors represent higher RMSE.
Figure A4. Scatterplots of DBM depths (x-axis) versus echosounder point depths (y-axis) for each survey date (panels A–D).
Figure A5. Maps of the differences between DBMs from surveys performed on different dates.
Figure A6. Histograms showing the differences between DBMs from surveys performed on different dates.
18 pages, 9378 KiB  
Article
Multi-Rotor Drone-Based Thermal Target Tracking with Track Segment Association for Search and Rescue Missions
by Seokwon Yeom
Drones 2024, 8(11), 689; https://doi.org/10.3390/drones8110689 - 19 Nov 2024
Viewed by 369
Abstract
Multi-rotor drones have expanded their range of applications, one of which being search and rescue (SAR) missions using infrared thermal imaging. This paper addresses thermal target tracking with track segment association (TSA) for SAR missions. Three types of associations including TSA are developed with an interacting multiple model (IMM) approach. During multiple-target tracking, tracks are initialized, maintained, and terminated. There are three different associations in track maintenance: measurement–track association, track–track association for tracks that exist at the same time (track association and fusion), and track–track association for tracks that exist at separate times (TSA). Measurement–track association selects the statistically nearest measurement and updates the track with the measurement through the IMM filter. Track association and fusion fuses redundant tracks for the same target that are spatially separated. TSA connects tracks that have become broken and separated over time. This process is accomplished through the selection of candidate track pairs, backward IMM filtering, association testing, and an assignment rule. In the experiments, a drone was equipped with an infrared thermal imaging camera, and two thermal videos were captured of three people in a non-visible environment. These three hikers were located close together and occluded by each other or other obstacles in the mountains. The drone was allowed to move arbitrarily. The tracking results were evaluated by the average total track life, average mean track life, and average track purity. The track segment association improved the average mean track life of each video by 99.8% and 250%, respectively Full article
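The abstract states that measurement-track association "selects the statistically nearest measurement". The sketch below illustrates that step in isolation, using the Mahalanobis distance with a chi-square gate; it is a generic illustration, not the paper's IMM/TSA implementation, and the gate value is an assumption.

```python
# Sketch of measurement-track association: pick the statistically nearest
# measurement to a track's predicted position, subject to a validation gate.
import numpy as np

def nearest_measurement(pred: np.ndarray, innov_cov: np.ndarray,
                        measurements: np.ndarray, gate: float = 9.21):
    """pred: predicted (x, y); innov_cov: 2x2 innovation covariance;
    measurements: (N, 2) candidate detections; gate ~ chi-square 99% for 2 dof."""
    inv_s = np.linalg.inv(innov_cov)
    best_idx, best_d2 = None, gate
    for i, z in enumerate(measurements):
        v = z - pred                          # innovation
        d2 = float(v @ inv_s @ v)             # squared Mahalanobis distance
        if d2 < best_d2:
            best_idx, best_d2 = i, d2
    return best_idx                           # None if nothing falls inside the gate

pred = np.array([10.0, 5.0])
S = np.array([[2.0, 0.0], [0.0, 2.0]])
z = np.array([[10.5, 5.5], [14.0, 9.0]])
print(nearest_measurement(pred, S, z))        # -> 0 (the nearby detection)
```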
Figures:
Figure 1. Framework of multiple target tracking following object detection.
Figure 2. Block diagram of multiple-target tracking.
Figure 3. Illustration of old and young candidate track pairs.
Figure 4. Illustration of assignment between old and young tracks.
Figure 5. Thermal frames of (a) Video 1 and (b) Video 2.
Figure 6. YOLOv5x detection results of (a) Video 1 and (b) Video 2.
Figure 7. Centroids of YOLOv5x detections for 1801 frames of (a) Video 1 and (b) Video 2.
Figure 8. Video 1 tracks on the 1st frame background: (a) Case 1, (b) Case 2, (c) Case 3.
Figure 9. Video 1 tracks on the white background: (a) Case 1, (b) Case 2, (c) Case 3.
Figure 10. Video 2 tracks on the 1st frame background: (a) Case 1, (b) Case 2, (c) Case 3.
Figure 11. Video 2 tracks on the white background: (a) Case 1, (b) Case 2, (c) Case 3.
Figure 12. First track of Video 1: (a) 2 TSAs, (b) 1st TSA, (c) 2nd TSA.
Figure 13. First track of Video 2: (a) 9 TSAs, (b) 1st and 2nd TSAs, (c) 3rd TSA, (d) 4th TSA, (e) 5th–8th TSAs, (f) 9th TSA (x: measurement, o: forward estimation, □: backward update, △: backward prediction).
15 pages, 596 KiB  
Article
DV-DETR: Improved UAV Aerial Small Target Detection Algorithm Based on RT-DETR
by Xiaolong Wei, Ling Yin, Liangliang Zhang and Fei Wu
Sensors 2024, 24(22), 7376; https://doi.org/10.3390/s24227376 - 19 Nov 2024
Viewed by 255
Abstract
For drone-based detection tasks, accurately identifying small-scale targets like people, bicycles, and pedestrians remains a key challenge. In this paper, we propose DV-DETR, an improved detection model based on the Real-Time Detection Transformer (RT-DETR), specifically optimized for small target detection in high-density scenes. To achieve this, we introduce three main enhancements: (1) ResNet18 as the backbone network to improve feature extraction and reduce model complexity; (2) the integration of recalibration attention units and deformable attention mechanisms in the neck network to enhance multi-scale feature fusion and improve localization accuracy; and (3) the use of the Focaler-IoU loss function to better handle the imbalanced distribution of target scales and focus on challenging samples. Experimental results on the VisDrone2019 dataset show that DV-DETR achieves an mAP@0.5 of 50.1%, a 1.7% improvement over the baseline model, while increasing detection speed from 75 FPS to 90 FPS, meeting real-time processing requirements. These improvements not only enhance the model’s accuracy and efficiency but also provide practical significance in complex, high-density urban environments, supporting real-world applications in UAV-based surveillance and monitoring tasks. Full article
(This article belongs to the Section Remote Sensors)
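The Focaler-IoU idea mentioned in the abstract remaps the raw IoU through a linear interval so that training weight is concentrated on a chosen band of samples. The sketch below is a minimal Focaler-IoU-style loss in PyTorch, not the paper's implementation: the interval bounds `d` and `u`, the reduction, and how the term is combined with DV-DETR's other losses are assumptions.

```python
import torch


def focaler_iou_loss(iou: torch.Tensor, d: float = 0.0, u: float = 0.95) -> torch.Tensor:
    """Focaler-IoU-style loss sketch.

    iou:  per-box IoU values in [0, 1].
    d, u: interval bounds of the linear remapping (hypothetical defaults;
          tuned per dataset in practice). IoU below d maps to 0, above u to 1.
    Returns the mean of (1 - remapped IoU).
    """
    iou_focaler = ((iou - d) / (u - d)).clamp(min=0.0, max=1.0)
    return (1.0 - iou_focaler).mean()


# Toy usage with made-up IoU values.
iou = torch.tensor([0.10, 0.45, 0.80, 0.97])
print(focaler_iou_loss(iou))
```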
Show Figures

Figure 1: SBA module structure.
Figure 2: RAU unit structure.
Figure 3: RepConv module.
Figure 4: Deformable attention structure.
Figure 5: DV-DETR model structure.
Figure 6: Loss function curves of DV-DETR.
30 pages, 929 KiB  
Review
Drones in Precision Agriculture: A Comprehensive Review of Applications, Technologies, and Challenges
by Ridha Guebsi, Sonia Mami and Karem Chokmani
Drones 2024, 8(11), 686; https://doi.org/10.3390/drones8110686 - 19 Nov 2024
Viewed by 518
Abstract
In the face of growing challenges in modern agriculture, such as climate change, sustainable resource management, and food security, drones are emerging as essential tools for transforming precision agriculture. This systematic review, based on an in-depth analysis of recent scientific literature (2020–2024), provides a comprehensive synthesis of current drone applications in the agricultural sector, primarily focusing on studies from this period while including a few notable exceptions of particular interest. Our study examines in detail the technological advancements in drone systems, including innovative aerial platforms, cutting-edge multispectral and hyperspectral sensors, and advanced navigation and communication systems. We analyze diagnostic applications, such as crop monitoring and multispectral mapping, as well as interventional applications like precision spraying and drone-assisted seeding. The integration of artificial intelligence and the Internet of Things (IoT) in analyzing drone-collected data is highlighted, demonstrating significant improvements in early disease detection, yield estimation, and irrigation management. Specific case studies illustrate the effectiveness of drones in various crops, from viticulture to cereal cultivation. Despite these advancements, we identify several obstacles to widespread drone adoption, including regulatory, technological, and socio-economic challenges. This study particularly emphasizes the need to harmonize regulations on beyond visual line of sight (BVLOS) flights and improve economic accessibility for small-scale farmers. This review also identifies key opportunities for future research, including the use of drone swarms, improved energy autonomy, and the development of more sophisticated decision-support systems integrating drone data. In conclusion, we underscore the transformative potential of drones as a key technology for more sustainable, productive, and resilient agriculture in the face of global challenges in the 21st century, while highlighting the need for an integrated approach combining technological innovation, adapted policies, and farmer training. Full article
(This article belongs to the Special Issue Advances of UAV in Precision Agriculture)
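As a small illustration of the per-pixel products derived from drone multispectral imagery in the workflows this review surveys, the sketch below computes NDVI from red and near-infrared reflectance bands. The array shapes and the stress threshold are hypothetical, and real pipelines include radiometric calibration and orthomosaicking steps that are omitted here.

```python
import numpy as np


def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    nir, red: reflectance bands as arrays of identical shape.
    eps guards against division by zero on dark pixels.
    Values near +1 indicate dense, healthy vegetation; values near 0 or
    below indicate soil, water, or stressed canopy.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)


# Toy 2x2 example with made-up reflectances.
nir = np.array([[0.60, 0.55], [0.20, 0.05]])
red = np.array([[0.10, 0.12], [0.18, 0.04]])
index = ndvi(nir, red)
stressed = index < 0.3  # hypothetical threshold for flagging stressed zones
print(index.round(2), stressed, sep="\n")
```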
Show Figures

Figure 1: PRISMA flow diagram for the selection of articles on the use of drones in agriculture.
Figure 2: Block diagram of a drone system.
Figure 3: Data workflow in precision agriculture: from drone acquisition to farmer decision support.
26 pages, 1748 KiB  
Article
Sparse Online Gaussian Process Adaptive Control of Unmanned Aerial Vehicle with Slung Payload
by Muhammed Rasit Kartal, Dmitry I. Ignatyev and Argyrios Zolotas
Drones 2024, 8(11), 687; https://doi.org/10.3390/drones8110687 - 19 Nov 2024
Viewed by 284
Abstract
In the past decade, Unmanned Aerial Vehicles (UAVs) have garnered significant attention across diverse applications, including surveillance, cargo shipping, and agricultural spraying. Despite their widespread deployment, concerns about maintaining stability and safety, particularly when carrying payloads, persist. The development of such UAV platforms necessitates robust control mechanisms to ensure stable and precise maneuvering. Numerous UAV operations require the integration of payloads, which introduces substantial stability challenges. Notably, operations involving unstable payloads, such as liquid or slung payloads, pose a considerable challenge in this regard, falling into the category of mismatched uncertain systems. This study focuses on establishing stability for slung payload-carrying systems. Our approach combines several algorithms: the incremental backstepping control algorithm (IBKS), integrator backstepping (IBS), Proportional–Integral–Derivative (PID) control, and the Sparse Online Gaussian Process (SOGP), a machine learning technique that identifies and mitigates disturbances. Linear and nonlinear methodologies are compared across different scenarios to identify an effective solution. The machine learning component, built on SOGP, effectively detects and counteracts disturbances. Insights are discussed in the context of rejecting liquid-sloshing disturbances. Full article
(This article belongs to the Special Issue Advances of UAV in Precision Agriculture)
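To give a flavour of how an online, budgeted kernel model can estimate a disturbance term that the controller then cancels, here is a heavily simplified Python sketch. It is not the SOGP algorithm used in the paper, which maintains a proper Gaussian-process posterior with principled basis-vector pruning; this stand-in fits kernel ridge regression over a fixed-size buffer of recent samples, and all names, feature choices, and the pruning rule are assumptions.

```python
import numpy as np


class BudgetedKernelDisturbanceModel:
    """Simplified online disturbance estimator with a fixed basis budget.

    Stores at most `budget` (feature, disturbance) samples and fits kernel
    ridge regression over them; the oldest sample is dropped when the budget
    is exceeded (SOGP instead prunes the least informative basis vector).
    """

    def __init__(self, budget=50, length_scale=1.0, reg=1e-3):
        self.budget = budget
        self.ls = length_scale
        self.reg = reg
        self.X, self.y = [], []

    def _k(self, A, B):
        # RBF kernel between row vectors of A (n, d) and B (m, d).
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / self.ls**2)

    def update(self, x, residual):
        """x: feature vector (e.g. velocity error); residual: observed disturbance."""
        self.X.append(np.asarray(x, float))
        self.y.append(float(residual))
        if len(self.X) > self.budget:  # crude pruning rule
            self.X.pop(0)
            self.y.pop(0)

    def predict(self, x):
        if not self.X:
            return 0.0
        X = np.stack(self.X)
        K = self._k(X, X) + self.reg * np.eye(len(X))
        alpha = np.linalg.solve(K, np.array(self.y))
        return float(self._k(np.asarray(x, float)[None, :], X) @ alpha)


# Usage sketch: subtract the estimated disturbance from a nominal command.
model = BudgetedKernelDisturbanceModel()
model.update([0.1, -0.2], residual=0.35)
u_nominal = 1.0
u = u_nominal - model.predict([0.12, -0.18])
print(u)
```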
Show Figures

Figure 1: Pendulum and UAV frames.
Figure 2: Proposed cascade control system diagram.
Figure 3: Proportional–Integral–Derivative (PID) controller diagram.
Figure 4: Integrator backstepping diagram.
Figure 5: Incremental backstepping methodology diagram.
Figure 6: Anti-windup command filter diagram.
Figure 7: Proposed cascade control system diagram.
Figure 8: Step signal command for position and controller performance comparison.
Figure 9: Step signal command for position and controller performance comparison with payload.
Figure 10: UAV spraying drone visual.
Figure 11: Spraying operation from the top.
Figure 12: Spraying operation from the corner angle.
Figure 13: X, Y, Z position error values in simulation.
Figure 14: X-Y-Z position, alpha angle, beta angle, and weight change relation.