Topic Editors

Optical Communications Laboratory, Ocean College, Zhejiang University, Zheda Road 1, Zhoushan 316021, China
Network and Telecommunication Research Group, University of Haute-Alsace, 68008 Colmar, France
Department of Engineering, Manchester Metropolitan University, Manchester M15GD, UK
Hamdard Institute of Engineering & Technology, Islamabad 44000, Pakistan
Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China

Advances, Innovations and Applications of UAV Technology for Remote Sensing

Abstract submission deadline
closed (30 May 2023)
Manuscript submission deadline
closed (31 August 2023)
Viewed by
55846

Topic Information

Dear Colleagues,

Nowadays, a variety of Unmanned Aerial Vehicles (UAVs) are commercially available and widely used for real-world tasks such as environmental monitoring, construction site surveys, remote sensing data collection, vertical structure inspection, glaciology, smart agriculture, forestry, atmospheric research, disaster prevention, humanitarian observation, biological sensing, reef monitoring, fire monitoring, volcanic gas sampling, gas pipeline monitoring, hydrology, ecology, and archaeology. These less-invasive aerial robots require minimal user intervention, offer high-level autonomous functionality, and can carry payloads for specific missions. UAV operations are highly effective in collecting the quantitative and qualitative data required to monitor isolated and distant regions. The integration of UAVs in such regions has substantially enhanced environmental monitoring by saving time, increasing precision, minimizing the human footprint, improving safety, and extending the study area of hard-to-access regions.

Moreover, we have seen notable growth in emerging technologies such as artificial intelligence (AI), machine learning (ML), deep learning (DL), computer vision, the Internet of Things (IoT), laser scanning, sensing, oblique photogrammetry, aerial imaging, efficient perception, and 3D mapping, all of which assist UAVs in their operations. These technologies can outperform humans in sophisticated tasks such as medical image analysis, 3D mapping, aerial photography, and autonomous driving, and there is growing interest in applying them to enhance UAVs' autonomy and other capabilities. We have also witnessed tremendous growth in the use of UAVs for remote sensing of rural, urban, suburban, and remote regions. The extensive applicability and popularity of UAVs are not only strengthening the development of advanced UAV sensors, including RGB cameras, LiDAR, laser scanners, thermal cameras, and hyperspectral and multispectral sensors, but are also driving pragmatic, innovative problem-solving features and intelligent decision-making strategies in diverse domains. The integration of autonomous navigation, collision avoidance, strong mobility for acquiring images at high temporal and spatial resolutions, environmental awareness, communication, precise control, dynamic data collection, 3D information acquisition, and intelligent algorithms further supports UAV-based remote sensing for multiple applications. The growing advancements and innovations of UAVs as a remote sensing platform, together with the ongoing miniaturization of instrumentation, have resulted in an expanding uptake of this technology across the remote sensing sciences.

This Topic aims to provide a modern viewpoint on recent developments, novel patterns, and applications in the field. Our objective is to gather the latest research contributions from academics and practitioners with diverse interests to fill the gap in the aforementioned research areas. We invite researchers to contribute high-quality scientific articles that bridge the gap between theory, design practice, and applications. We seek reviews, surveys, and original research articles on, but not limited to, the topics given below:

  • Real-time AI for UAV motion planning, trajectory planning and control, and data gathering and analysis;
  • Image/LiDAR feature extraction;
  • Processing algorithms for UAV-aided imagery datasets;
  • Semantic/instance segmentation, classification, object detection and tracking with UAV data using data mining, AI, ML, and DL algorithms;
  • Cooperative perception and mapping utilizing UAV swarms;
  • UAV image/point-cloud processing for the power, oil, and industrial sectors, hydraulics, agriculture, ecology, emergency response, and smart cities;
  • UAV-borne hyperspectral remote sensing;
  • Collaborative strategies and mechanisms between UAVs and other systems, including hardware/software architectures, multi-agent systems, protocols, and strategies for working together;
  • UAV onboard remote sensing data storage, transmission, and retrieval;
  • Advances in the applications of UAVs in archaeology, precision agriculture, yield protection, atmospheric research, area management, photogrammetry, 3D modeling, object reconstruction, Earth observation, climate change, sensing and imaging for coastal and environmental monitoring, construction, mining, pollution monitoring, target tracking, humanitarian localization, security and surveillance, and ecological applications;
  • Use of optical, laser, hyperspectral, multispectral, and SAR technologies for UAV-based remote sensing.

Dr. Syed Agha Hassnain Mohsan
Prof. Dr. Pascal Lorenz
Dr. Khaled Rabie
Dr. Muhammad Asghar Khan
Dr. Muhammad Shafiq
Topic Editors

Keywords

  • drones
  • UAVs
  • aerial robots
  • remote sensing
  • aerial imagery
  • LiDAR
  • machine learning
  • atmospheric research
  • sensing and imaging
  • processing algorithms

Participating Journals

Journal (code): Impact Factor; CiteScore; Launched; First Decision (median); APC
AI (ai): 3.1; 7.2; 2020; 17.6 days; CHF 1600
Drones (drones): 4.4; 5.6; 2017; 21.7 days; CHF 2600
Inventions (inventions): 2.1; 4.8; 2016; 21.2 days; CHF 1800
Machine Learning and Knowledge Extraction (make): 4.0; 6.3; 2019; 27.1 days; CHF 1800
Remote Sensing (remotesensing): 4.2; 8.3; 2009; 24.7 days; CHF 2700
Sensors (sensors): 3.4; 7.3; 2001; 16.8 days; CHF 2600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of these benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (23 papers)

9 pages, 957 KiB  
Editorial
Editorial on the Advances, Innovations and Applications of UAV Technology for Remote Sensing
by Syed Agha Hassnain Mohsan, Muhammad Asghar Khan and Yazeed Yasin Ghadi
Remote Sens. 2023, 15(21), 5087; https://doi.org/10.3390/rs15215087 - 24 Oct 2023
Cited by 2 | Viewed by 1658
Abstract
Currently, several kinds of Unmanned Aerial Vehicles (UAVs) or drones [...] Full article
Figures: application scenarios of UAVs (Figure 1); UAV-based remote sensing applications (Figure 2).
18 pages, 13901 KiB  
Article
The Method of Multi-Angle Remote Sensing Observation Based on Unmanned Aerial Vehicles and the Validation of BRDF
by Hongtao Cao, Dongqin You, Dabin Ji, Xingfa Gu, Jianguang Wen, Jianjun Wu, Yong Li, Yongqiang Cao, Tiejun Cui and Hu Zhang
Remote Sens. 2023, 15(20), 5000; https://doi.org/10.3390/rs15205000 - 18 Oct 2023
Cited by 2 | Viewed by 5149
Abstract
The measurement of bidirectional reflectivity for ground-based objects is a highly intricate task, with significant limitations in the capabilities of both ground-based and satellite-based observations from multiple viewpoints. In recent years, unmanned aerial vehicles (UAVs) have emerged as a novel remote sensing method, offering convenience and cost-effectiveness while enabling multi-view observations. This study devised a polygonal flight path along the hemisphere to achieve bidirectional reflectance distribution function (BRDF) measurements for large zenith angles and all azimuth angles. By employing photogrammetry’s principle of aerial triangulation, accurate observation angles were restored, and the geometric structure of “sun-object-view” was constructed. Furthermore, three BRDF models (M_Walthall, RPV, RTLSR) were compared and evaluated at the UAV scale in terms of fitting quality, shape structure, and reflectance errors to assess their inversion performance. The results demonstrated that the RPV model exhibited superior inversion performance, followed by M_Walthall; however, RTLSR performed comparatively poorly. Notably, the M_Walthall model excelled in capturing smooth terrain object characteristics, while RPV proved applicable to various types of rough terrain objects with multi-scale applicability for both UAVs and satellites. These methods and findings are crucial for an extensive exploration into the bidirectional reflectivity properties of ground-based objects, and provide an essential technical procedure for studying various ground-based objects’ in-plane reflection properties. Full article
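Inverting polynomial-style BRDF models from multi-angle reflectance samples reduces, in the simplest cases, to a linear least-squares fit. The sketch below illustrates this for one common form of the modified Walthall model; the model form, angle sampling, and synthetic reflectance values are assumptions for illustration, not the paper's data or code.

```python
import numpy as np

def m_walthall_design(theta_s, theta_v, rel_az):
    """Design matrix for one common form of the modified Walthall model:
    R = a*ts^2*tv^2 + b*(ts^2 + tv^2) + c*ts*tv*cos(phi) + d
    (angles in radians; this exact form is an assumption, not taken from the paper)."""
    ts2, tv2 = theta_s**2, theta_v**2
    return np.column_stack([ts2 * tv2, ts2 + tv2,
                            theta_s * theta_v * np.cos(rel_az),
                            np.ones_like(theta_v)])

# Synthetic multi-angle samples standing in for UAV-derived reflectance
rng = np.random.default_rng(0)
theta_s = np.full(200, np.deg2rad(35.0))          # solar zenith
theta_v = np.deg2rad(rng.uniform(0, 60, 200))     # view zenith, 0-60 degrees
rel_az = np.deg2rad(rng.uniform(0, 360, 200))     # relative azimuth
true = np.array([0.05, 0.02, -0.03, 0.25])
refl = m_walthall_design(theta_s, theta_v, rel_az) @ true + rng.normal(0, 0.005, 200)

# Linear least-squares inversion and a simple fit-quality metric (RMSE)
A = m_walthall_design(theta_s, theta_v, rel_az)
coef, *_ = np.linalg.lstsq(A, refl, rcond=None)
rmse = np.sqrt(np.mean((A @ coef - refl) ** 2))
print(coef, rmse)
```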
Figures 1-23: bidirectional reflection geometry, the photogrammetric workflow and observation angles, solar position, multi-angle view design, the DJI P4M remote sensing system, the Lambertian and radiation reference panels, corrected image positions and view angles, reflectance distributions of four objects, inversion correlations and BRDF shapes for the M_Walthall, RPV, and RTLSR models, principal-plane profiles, hot-spot reproduction, and reflectance errors.
25 pages, 6253 KiB  
Article
Using Schlieren Imaging and a Radar Acoustic Sounding System for the Detection of Close-in Air Turbulence
by Samantha Gordon and Graham Brooker
Sensors 2023, 23(19), 8255; https://doi.org/10.3390/s23198255 - 5 Oct 2023
Viewed by 1388
Abstract
This paper presents a novel sensor for the detection and characterization of regions of air turbulence. As part of the ground truth process, it consists of a combined Schlieren imager and a Radar Acoustic Sounding System (RASS) to produce dual-modality “images” of air movement within the measurement volume. The ultrasound-modulated Schlieren imager consists of a strobed point light source, parabolic mirror, light block, and camera, which are controlled by two laptops. It provides a fine-scale projection of the acoustic pulse-modulated air turbulence through the measurement volume. The narrow beam 40 kHz/17 GHz RASS produces spectra based on Bragg-enhanced Doppler radar reflections from the acoustic pulse as it travels. Tests using artificially generated air vortices showed some disruption of the Schlieren image and of the RASS spectrogram. This should allow the higher-resolution Schlieren images to identify the turbulence mechanisms that are disrupting the RASS spectra. The objective of this combined sensor is to have the Schlieren component inform the interpretation of RASS spectra to allow the latter to be used as a stand-alone sensor on a UAV. Full article
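The 40 kHz/17 GHz pairing follows from the Bragg condition for a RASS, under which the acoustic wavelength must be half the electromagnetic wavelength. A minimal back-of-the-envelope check, assuming a nominal sound speed of 343 m/s:

```python
C_LIGHT = 3.0e8    # speed of light, m/s
C_SOUND = 343.0    # assumed speed of sound at roughly 20 C, m/s

f_em = 17e9                      # radar frequency quoted in the abstract (17 GHz)
lam_em = C_LIGHT / f_em          # ~17.6 mm electromagnetic wavelength
lam_ac = lam_em / 2.0            # Bragg condition: acoustic wavelength = half EM wavelength
f_ac = C_SOUND / lam_ac          # ~39 kHz, close to the 40 kHz transducer quoted in the abstract
print(f"EM wavelength {lam_em*1e3:.1f} mm, Bragg acoustic frequency {f_ac/1e3:.1f} kHz")
```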
Figures 1-24 and A1-A4: Schlieren imager and light-block schematics, the deflection/SPL/frequency relationship, the Bragg-condition simulation, RASS geometry, the integrated-system and Doppler-radar block diagrams, acoustic pulse radar cross-section and receiver-chain plots, the laboratory setup (RASS, Schlieren optics, fan placement), acoustic-wave images, Schlieren images and RASS spectrograms for fan- and leaf-blower-generated turbulence, time- and frequency-domain EM signals, the proposed UAV-based monopulse configuration, and subsystem connection diagrams.
20 pages, 46373 KiB  
Article
HAM-Transformer: A Hybrid Adaptive Multi-Scaled Transformer Net for Remote Sensing in Complex Scenes
by Keying Ren, Xiaoyan Chen, Zichen Wang, Xiwen Liang, Zhihui Chen and Xia Miao
Remote Sens. 2023, 15(19), 4817; https://doi.org/10.3390/rs15194817 - 3 Oct 2023
Cited by 3 | Viewed by 1342
Abstract
The quality of remote sensing images has been greatly improved by the rapid development of unmanned aerial vehicles (UAVs), which has made it possible to detect small objects in the most complex scenes. Recently, learning-based object detection has been introduced and has gained popularity in remote sensing image processing. To improve the detection accuracy of small, weak objects in complex scenes, this work proposes a novel hybrid backbone composed of a convolutional neural network and an adaptive multi-scaled transformer, referred to as HAM-Transformer Net. HAM-Transformer Net first extracts the details of feature maps using convolutional local feature extraction blocks. Second, hierarchical information is extracted using multi-scale location coding. Finally, an adaptive multi-scale transformer block is used to extract further features in different receptive fields and to fuse them adaptively. We implemented comparison experiments on a self-constructed dataset. The experiments proved that the method is a significant improvement over state-of-the-art object detection algorithms. We also conducted a large number of comparative experiments in this work to demonstrate the effectiveness of this method. Full article
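Depthwise-separable convolution and efficient channel attention (ECA) are standard building blocks of the kind referenced in the abstract. The sketch below shows generic PyTorch versions of the two ideas; layer sizes and composition are illustrative and do not reproduce the actual HAM-Transformer Net definition.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv followed by a 1x1 pointwise conv (generic building block,
    not the exact layer used in HAM-Transformer Net)."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.dw = nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in, bias=False)
        self.pw = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.GELU()

    def forward(self, x):
        return self.act(self.bn(self.pw(self.dw(x))))

class ECA(nn.Module):
    """Efficient channel attention: global average pooling plus a 1D conv over the channel axis."""
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, k, padding=k // 2, bias=False)

    def forward(self, x):
        w = x.mean(dim=(2, 3))                      # (B, C) global average pool
        w = self.conv(w.unsqueeze(1)).squeeze(1)    # 1D conv across channels
        return x * torch.sigmoid(w)[:, :, None, None]

x = torch.randn(2, 32, 64, 64)
y = ECA()(DepthwiseSeparableConv(32, 64)(x))
print(y.shape)   # torch.Size([2, 64, 64, 64])
```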
Figures 1-8: the HAM-Transformer Net architecture (CLB, MPE, and AMT blocks), comparisons of convolution-based blocks and feed-forward networks, the SK-ViT grouping/selection/fusion structure, single-branch attention computation, object detection visualizations, and GradCAM attention heat maps.
19 pages, 12731 KiB  
Article
Enhancing UAV-SfM Photogrammetry for Terrain Modeling from the Perspective of Spatial Structure of Errors
by Wen Dai, Ruibo Qiu, Bo Wang, Wangda Lu, Guanghui Zheng, Solomon Obiri Yeboah Amankwah and Guojie Wang
Remote Sens. 2023, 15(17), 4305; https://doi.org/10.3390/rs15174305 - 31 Aug 2023
Cited by 1 | Viewed by 1368
Abstract
UAV-SfM photogrammetry is widely used in the remote sensing and geoscience communities. Scholars have tried to optimize UAV-SfM for terrain modeling based on analysis of error statistics like root mean squared error (RMSE), mean error (ME), and standard deviation (STD). However, the errors of terrain modeling tend to be spatially distributed. Although error statistics can represent the magnitude of errors, revealing the spatial structure of errors is still challenging, and a “best practice” for UAV-SfM from this perspective is still lacking in the research community. Thus, this study designed various UAV-SfM photogrammetric scenarios and investigated the effects of image collection strategies and GCPs on terrain modeling. The error maps of different photogrammetric scenarios were calculated and quantitatively analyzed by ME, STD, and Moran’s I. The results show that: (1) A high camera inclination (20–40°) enhances UAV-SfM photogrammetry. This not only decreases the magnitude of errors but also mitigates their spatial correlation (Moran’s I). Supplementing convergent images is valuable for reducing errors in a nadir camera block, but it is unnecessary when the image block already has a high camera angle. (2) Flying height increases the magnitude of errors (ME and STD) but does not affect the spatial structure (Moran’s I). By contrast, the camera angle is more important than the flying height for improving the spatial structure of errors. (3) A small number of GCPs rapidly reduces the magnitude of errors (ME and STD), and a further increase in GCPs has a marginal effect. However, the structure of errors (Moran’s I) can be further improved with increasing GCPs. (4) For the same number of GCPs, their distribution is critical for UAV-SfM photogrammetry. The edge distribution should be considered first, followed by the even distribution. The research findings contribute to understanding how different image collection scenarios and GCPs can influence subsequent terrain modeling accuracy, precision, and spatial structure of errors. The latter (spatial structure of errors) should be routinely assessed in evaluations of the quality of UAV-SfM photogrammetry. Full article
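Moran's I is the statistic the authors use to quantify the spatial structure of the error maps. Below is a minimal sketch of a global Moran's I with rook-contiguity weights on a gridded error map; the data are synthetic, and a real assessment would typically mask no-data cells and may row-standardize the weights.

```python
import numpy as np

def morans_i(err):
    """Global Moran's I for a 2D error map with rook (4-neighbour) contiguity weights."""
    z = err - err.mean()
    num, w_sum = 0.0, 0.0
    # pair each cell with its right and lower neighbour; count each pair twice (symmetric weights)
    for dz in (z[:, 1:] * z[:, :-1], z[1:, :] * z[:-1, :]):
        num += 2.0 * dz.sum()
        w_sum += 2.0 * dz.size
    return (z.size / w_sum) * (num / (z ** 2).sum())

rng = np.random.default_rng(1)
random_err = rng.normal(0, 0.05, (100, 100))                      # spatially uncorrelated errors
smooth_err = np.cumsum(rng.normal(0, 0.01, (100, 100)), axis=1)   # spatially correlated errors (row-wise random walks)
print(morans_i(random_err), morans_i(smooth_err))                 # near 0 vs. clearly positive
```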
Figures 1-10: the study workflow, flight paths and GCP distributions for the T1 and T2 areas, GCP selection, and error maps with STD, ME, and Moran's I for different camera angles, camera-angle combinations, flying heights, and GCP numbers, distribution indices, and layouts.
16 pages, 2398 KiB  
Article
Dynamic Repositioning of Aerial Base Stations for Enhanced User Experience in 5G and Beyond
by Shams Ur Rahman, Ajmal Khan, Muhammad Usman, Muhammad Bilal, You-Ze Cho and Hesham El-Sayed
Sensors 2023, 23(16), 7098; https://doi.org/10.3390/s23167098 - 11 Aug 2023
Cited by 1 | Viewed by 940
Abstract
The ultra-dense deployment (UDD) of small cells in 5G and beyond to enhance capacity and data rate is promising, but since user densities continually change, the static deployment of small cells can lead to wasted capital, underutilized resources, and user dissatisfaction. This work proposes the use of Aerial Base Stations (ABSs) wherein small cells are mounted on Unmanned Aerial Vehicles (UAVs), which can be deployed to a set of candidate locations. Furthermore, based on the current user densities, this work studies the optimal placement of the ABSs, at a subset of potential candidate positions, to maximize the total received power and signal-to-interference ratio. The problems of the optimal placement for increasing received power and signal-to-interference ratio are formulated, and optimal placement solutions are designed. The proposed solutions compute the optimal candidate locations for the ABSs based on the current user densities. When the user densities change significantly, the proposed solutions can be re-executed to re-compute the optimal candidate locations for the ABSs, and hence the ABSs can be moved to their new candidate locations. Simulation results show that a 22% or more increase in the total received power can be achieved through the optimal placement of the Aerial BSs and that more than 60% of users have a more than 80% chance of having their individual received power increased. Full article
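Selecting a subset of candidate positions that maximizes total received power can, for small problem sizes, be illustrated with an exhaustive search under a simple path-loss model. The channel model, parameters, and user layout below are placeholders, not the formulation used in the paper.

```python
import itertools
import numpy as np

def received_power(users, abs_pos, p_tx=1.0, alpha=3.5, h=50.0):
    """Total received power, summing each user's strongest ABS under a simple
    distance^(-alpha) path-loss model (illustrative, not the paper's channel model)."""
    d2 = ((users[:, None, :] - abs_pos[None, :, :]) ** 2).sum(-1) + h ** 2
    return (p_tx * d2 ** (-alpha / 2)).max(axis=1).sum()

rng = np.random.default_rng(2)
users = rng.uniform(0, 1000, (200, 2))        # current user positions (m)
candidates = rng.uniform(0, 1000, (12, 2))    # candidate ABS locations
k = 3                                         # number of ABSs available

# exhaustive search over candidate subsets (fine for few candidates and small k)
best = max(itertools.combinations(range(len(candidates)), k),
           key=lambda idx: received_power(users, candidates[list(idx)]))
print("best candidate subset:", best)
```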
Figures 1-10: the Aerial Base Station-based ultra-dense network, the solution flowchart, received-power degradation due to user mobility, percentage improvements in received power and SIR versus the number of Aerial BSs and candidate positions, and PMF/CCDF curves of the percentage of users receiving performance improvement.
20 pages, 12513 KiB  
Article
UAV-Based Terrain Modeling in Low-Vegetation Areas: A Framework Based on Multiscale Elevation Variation Coefficients
by Jiaxin Fan, Wen Dai, Bo Wang, Jingliang Li, Jiahui Yao and Kai Chen
Remote Sens. 2023, 15(14), 3569; https://doi.org/10.3390/rs15143569 - 16 Jul 2023
Cited by 4 | Viewed by 1626
Abstract
The removal of low vegetation is still challenging in UAV photogrammetry. Exploiting the different topographic features expressed by point-cloud data at different scales, a vegetation-filtering method based on multiscale elevation-variation coefficients is proposed for terrain modeling. First, virtual grids are constructed at different scales, and the average elevation values of the corresponding point clouds are obtained. Second, the amount of elevation change at any two scales in each virtual grid is calculated to obtain the difference in surface characteristics (degree of elevation change) at the corresponding two scales. Third, the elevation variation coefficient of the virtual grid that corresponds to the largest elevation variation degree is calculated, and threshold segmentation is performed based on the relation that the elevation variation coefficients of vegetated regions are much larger than those of terrain regions. Finally, the optimal calculation neighborhood radius of the elevation variation coefficients is analyzed, and the optimal segmentation threshold is discussed. The experimental results show that the multiscale elevation-variation coefficient method can accurately remove vegetation points and preserve ground points in low- and densely vegetated areas. The type I error, type II error, and total error in the study areas range from 1.93 to 9.20%, 5.83 to 5.84%, and 2.28 to 7.68%, respectively. The total error of the proposed method is 2.43–2.54% lower than that of the CSF, TIN, and PMF algorithms in the study areas. This study provides a foundation for the rapid establishment of high-precision DEMs based on UAV photogrammetry. Full article
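The core workflow, binning the point cloud into virtual grids, computing per-cell elevation statistics, and thresholding, can be illustrated with a deliberately simplified single-scale filter that estimates the ground as the per-cell minimum elevation. This is a stand-in for, not a reproduction of, the multiscale elevation-variation-coefficient method; the cell size and threshold are illustrative.

```python
import numpy as np

def flag_vegetation(points, cell=1.0, height_threshold=0.2):
    """Single-scale virtual-grid filter: estimate ground as the minimum elevation in each
    grid cell and flag points sitting more than `height_threshold` metres above it.
    A simplified stand-in for the paper's multiscale elevation-variation coefficient
    (which compares cell statistics across several scales); names and thresholds are illustrative."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    keys, inv = np.unique(ij, axis=0, return_inverse=True)
    cell_min = np.full(len(keys), np.inf)
    np.minimum.at(cell_min, inv, points[:, 2])
    return points[:, 2] - cell_min[inv] > height_threshold

rng = np.random.default_rng(3)
xy = rng.uniform(0, 20, (5000, 2))
z = rng.normal(0, 0.03, 5000)                  # flat synthetic terrain
veg = (xy[:, 0] < 5) & (xy[:, 1] < 5)          # a 5 m x 5 m patch of low vegetation
z[veg] += rng.uniform(0.3, 0.8, veg.sum())
mask = flag_vegetation(np.column_stack([xy, z]))
print(f"{mask.sum()} points flagged; {veg.sum()} were simulated as vegetation")
```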
Figures 1-14: the algorithm workflow, virtual-grid, topographic-model, and neighborhood-radius schematics, orthophotos and reference point clouds for T1 and T2, virtual grids and elevation-variation results at different scales, multiscale EVC results at different neighborhood radii, filtering-error comparisons across scales, radii, and thresholds, and filtered point clouds, DSMs, and DEMs compared with the CSF, TIN, and PMF methods.
14 pages, 22675 KiB  
Article
Crack Detection of Bridge Concrete Components Based on Large-Scene Images Using an Unmanned Aerial Vehicle
by Zhen Xu, Yingwang Wang, Xintian Hao and Jingjing Fan
Sensors 2023, 23(14), 6271; https://doi.org/10.3390/s23146271 - 10 Jul 2023
Cited by 6 | Viewed by 1554
Abstract
The current method of crack detection in bridges using unmanned aerial vehicles (UAVs) relies heavily on acquiring local images of bridge concrete components, making image acquisition inefficient. To address this, we propose a crack detection method that utilizes large-scene images acquired by a UAV. First, our approach involves designing a UAV-based scheme for acquiring large-scene images of bridges, followed by processing these images using a background denoising algorithm. Subsequently, we use a maximum crack width calculation algorithm that is based on the region of interest and the maximum inscribed circle. Finally, we applied the method to a typical reinforced concrete bridge. The results show that the large-scene images are only 1/9–1/22 of the local images for this bridge, which significantly improves detection efficiency. Moreover, the accuracy of the crack detection can reach up to 93.4%. Full article
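The maximum-inscribed-circle idea can be expressed with a Euclidean distance transform: the largest distance-transform value inside a binary crack mask is the radius of the widest inscribed circle, so twice that value approximates the maximum crack width. A generic sketch of this idea, not the paper's ROI-based pipeline or calibration:

```python
import numpy as np
from scipy import ndimage

def max_crack_width(mask, mm_per_pixel=1.0):
    """Maximum crack width from a binary crack mask via the maximum inscribed circle.
    The pixel-to-millimetre scale is an assumed calibration parameter."""
    dist = ndimage.distance_transform_edt(mask)
    r = dist.max()                                   # radius of the largest inscribed circle
    cy, cx = np.unravel_index(dist.argmax(), dist.shape)
    return 2.0 * r * mm_per_pixel, (cy, cx)

# toy mask: a crack 4 pixels wide running across a 100x100 image
mask = np.zeros((100, 100), dtype=bool)
mask[48:52, 10:90] = True
width_mm, center = max_crack_width(mask, mm_per_pixel=0.5)
print(width_mm, center)   # ~4 px * 0.5 mm/px = ~2 mm at the crack's widest point
```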
Figures 1-12: the technical framework, the UAV large-scene image acquisition strategy and aerial-photography model, the background-denoising workflow and results, the PR curve of the crack detection model, ROI cropping and the maximum-crack-width calculation flowchart, and crack detection and maximum-width annotation results.
21 pages, 7952 KiB  
Article
Research of an Unmanned Aerial Vehicle Autonomous Aerial Refueling Docking Method Based on Binocular Vision
by Kun Gong, Bo Liu, Xin Xu, Yuelei Xu, Yakun He, Zhaoxiang Zhang and Jarhinbek Rasol
Drones 2023, 7(7), 433; https://doi.org/10.3390/drones7070433 - 30 Jun 2023
Cited by 1 | Viewed by 1837
Abstract
In this paper, a visual navigation method based on binocular vision and a deep learning approach is proposed to solve the navigation problem of the unmanned aerial vehicle autonomous aerial refueling docking process. First, to meet the requirements of high accuracy and high frame rate in aerial refueling tasks, this paper proposes a single-stage lightweight drogue detection model, which greatly increases the inference speed of binocular images by introducing image alignment and depth-separable convolution and improves the feature extraction capability and scale adaptation performance of the model by using an efficient attention mechanism (ECA) and adaptive spatial feature fusion method (ASFF). Second, this paper proposes a novel method for estimating the pose of the drogue by spatial geometric modeling using optical markers, and further improves the accuracy and robustness of the algorithm by using visual reprojection. Moreover, this paper constructs a visual navigation vision simulation and semi-physical simulation experiments for the autonomous aerial refueling task, and the experimental results show the following: (1) the proposed drogue detection model has high accuracy and real-time performance, with a mean average precision (mAP) of 98.23% and a detection speed of 41.11 FPS in the embedded module; (2) the position estimation error of the proposed visual navigation algorithm is less than ±0.1 m, and the attitude estimation error of the pitch and yaw angle is less than ±0.5°; and (3) through comparison experiments with the existing advanced methods, the positioning accuracy of this method is improved by 1.18% compared with the current advanced methods. Full article
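For a rectified, parallel binocular rig, the depth of a matched feature (such as an optical marker on the drogue) follows from the disparity between the left and right images. A minimal triangulation sketch with placeholder intrinsics and baseline, not the paper's calibration:

```python
import numpy as np

def triangulate(u_left, u_right, v, fx, fy, cx, cy, baseline):
    """Back-project a matched image point from a rectified, parallel stereo pair:
    Z = fx * B / disparity, then recover X and Y from the pinhole model."""
    disparity = u_left - u_right
    Z = fx * baseline / disparity
    X = (u_left - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])

# assumed intrinsics: 1280x1024 sensor, fx = fy = 1600 px, 30 cm baseline
p = triangulate(u_left=700.0, u_right=652.0, v=500.0,
                fx=1600.0, fy=1600.0, cx=640.0, cy=512.0, baseline=0.30)
print(p)   # marker roughly 10 m away with a small lateral offset
```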
Figures 1-19: the visual-navigation coordinate systems, optical-marker types (LED lights, color markers, band marker), the navigation algorithm framework and lightweight drogue-detection network, feature extraction and matching with optical markers, binocular imaging and pose-estimation schematics, the visual-reprojection method, the simulation platform and dataset statistics, mAP/speed comparisons, detection results under cloud, low-light, and ground conditions, the simulation software system, and position/attitude error curves from simulation and semi-physical experiments.
19 pages, 87282 KiB  
Article
A Drone-Powered Deep Learning Methodology for High Precision Remote Sensing in California’s Coastal Shrubs
by Jon Detka, Hayley Coyle, Marcella Gomez and Gregory S. Gilbert
Drones 2023, 7(7), 421; https://doi.org/10.3390/drones7070421 - 25 Jun 2023
Cited by 6 | Viewed by 2526
Abstract
Wildland conservation efforts require accurate maps of plant species distribution across large spatial scales. High-resolution species mapping is difficult in diverse, dense plant communities, where extensive ground-based surveys are labor-intensive and risk damaging sensitive flora. High-resolution satellite imagery is available at scales needed for plant community conservation across large areas, but can be cost prohibitive and lack resolution to identify species. Deep learning analysis of drone-based imagery can aid in accurate classification of plant species in these communities across large regions. This study assessed whether drone-based imagery and deep learning modeling approaches could be used to map species in complex chaparral, coastal sage scrub, and oak woodland communities. We tested the effectiveness of random forest, support vector machine, and convolutional neural network (CNN) coupled with object-based image analysis (OBIA) for mapping in diverse shrublands. Our CNN + OBIA approach outperformed random forest and support vector machine methods to accurately identify tree and shrub species, vegetation gaps, and communities, even distinguishing two congeneric shrub species with similar morphological characteristics. Similar accuracies were attained when applied to neighboring sites. This work is key to the accurate species identification and large scale mapping needed for conservation research and monitoring in chaparral and other wildland plant communities. Uncertainty in model application is associated with less common species and intermixed canopies. Full article
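One common way to couple a CNN with object-based image analysis (OBIA) is to aggregate the CNN's per-pixel class probabilities over each image segment and assign the segment the highest-scoring class. The sketch below assumes generic array shapes and is not the software pipeline used in the study.

```python
import numpy as np

def obia_vote(prob_maps, segments):
    """Assign each OBIA segment the class with the highest mean CNN probability inside it.
    `prob_maps` is (n_classes, H, W) softmax output; `segments` is an (H, W) integer label
    image from any segmentation step (shapes and names are assumptions)."""
    n_seg = segments.max() + 1
    seg_class = np.zeros(n_seg, dtype=int)
    for s in range(n_seg):
        m = segments == s
        seg_class[s] = prob_maps[:, m].mean(axis=1).argmax()
    return seg_class[segments]   # per-pixel map of segment-level class decisions

# toy example: 3 classes, 2 segments splitting a 4x4 scene down the middle
prob = np.random.default_rng(4).dirichlet(np.ones(3), size=(4, 4)).transpose(2, 0, 1)
segs = np.zeros((4, 4), dtype=int)
segs[:, 2:] = 1
print(obia_vote(prob, segs))
```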
Figures 1-8 and A1-A4: maps, orthomosaics, canopy-height (nDSM), and slope models of the training and application sites at the UCSC Fort Ord Natural Reserve, example maritime chaparral and coastal sage scrub vegetation, CNN probability heat maps, CNN + OBIA classification results and accuracy assessment, and random forest and support vector machine classification results with their accuracy assessments.
22 pages, 3674 KiB  
Article
A UAV-Assisted Stackelberg Game Model for Securing IoMT Healthcare Networks
by Jamshed Ali Shaikh, Chengliang Wang, Muhammad Asghar Khan, Syed Agha Hassnain Mohsan, Saif Ullah, Samia Allaoua Chelloug, Mohammed Saleh Ali Muthanna and Ammar Muthanna
Drones 2023, 7(7), 415; https://doi.org/10.3390/drones7070415 - 23 Jun 2023
Cited by 4 | Viewed by 1625
Abstract
On the one hand, the Internet of Medical Things (IoMT) in healthcare systems has emerged as a promising technology to monitor patients’ health and provide reliable medical services, especially in remote and underserved areas. On the other hand, in disaster scenarios, the loss of communication infrastructure can make it challenging to establish reliable communication and to provide timely first aid services. To address this challenge, unmanned aerial vehicles (UAVs) have been adopted to assist hospital centers in delivering medical care to hard-to-reach areas. Despite the potential of UAVs to improve medical services in emergency scenarios, their limited resources make their security critical. Therefore, developing secure and efficient communication protocols for IoMT networks using UAVs is a vital research area that can help ensure reliable and timely medical services. In this paper, we introduce a novel Stackelberg security-based game theory algorithm, named Stackelberg ad hoc on-demand distance vector (SBAODV), to detect and recover data affected by black hole attacks in IoMT networks using UAVs. Our proposed scheme utilizes the substantial Stackelberg equilibrium (SSE) to formulate strategies that protect the system against attacks. We evaluate the performance of our proposed SBAODV scheme and compare it with existing routing schemes. Our results demonstrate that our proposed scheme outperforms existing schemes regarding packet delivery ratio (PDR), networking load, throughput, detection ratio, and end-to-end delay. Specifically, our proposed SBAODV protocol achieves a PDR of 97%, throughput ranging from 77.7 kbps to 87.3 kbps, and up to a 95% malicious detection rate at the highest number of nodes. Furthermore, our proposed SBAODV scheme offers significantly lower networking load (7% to 30%) and end-to-end delay (up to 30%) compared to existing routing schemes. These results demonstrate the efficiency and effectiveness of our proposed scheme in ensuring reliable and secure communication in IoMT emergency scenarios using UAVs. Full article
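The defender-attacker interaction behind such a scheme is a leader-follower (Stackelberg) game: the defender commits first and the attacker best-responds, with ties broken in the leader's favor in the strong equilibrium. Below is a textbook pure-strategy sketch with illustrative payoff matrices; it is not the paper's utility model or the SBAODV algorithm itself.

```python
import numpy as np

# toy payoff matrices: rows = defender (leader) actions, cols = attacker (follower) actions
# values are illustrative only, not taken from the paper's utility model
U_def = np.array([[ 3, -2],     # defender: monitor route A / monitor route B
                  [-1,  4]])
U_att = np.array([[-3,  2],     # attacker: attack route A / attack route B
                  [ 1, -4]])

def pure_strong_stackelberg(U_leader, U_follower):
    """Pure-strategy strong Stackelberg equilibrium: for each leader commitment, the
    follower best-responds (ties broken in the leader's favor); the leader then picks
    the commitment with the highest resulting payoff."""
    best = None
    for a in range(U_leader.shape[0]):
        br_value = U_follower[a].max()
        candidates = np.flatnonzero(U_follower[a] == br_value)
        b = candidates[np.argmax(U_leader[a, candidates])]   # tie-break in the leader's favor
        if best is None or U_leader[a, b] > best[2]:
            best = (a, b, U_leader[a, b])
    return best

print(pure_strong_stackelberg(U_def, U_att))  # (leader action, follower response, leader payoff)
```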
Figure 1: Taxonomy of IoMT.
Figure 2: IoMT-assisted UAVs network architecture.
Figure 3: Routing strategy with a black hole in a UAV network scenario.
Figure 4: Working principle of medical healthcare networking-assisted UAV in emergency scenarios. (a) UAV-assisted IoMT network; (b) black hole attack and game-theoretic strategies of the defender and attacker; (c) UAV defends its resources and blocks the black hole attacker.
Figure 5: Experimental scenario in the NS2 environment.
Figure 6: Packet delivery ratio (%) vs. number of nodes.
Figure 7: Network routing load vs. number of nodes.
Figure 8: Throughput (kbps) vs. number of nodes.
Figure 9: Detection ratio vs. number of nodes.
Figure 10: End-to-end delay (ms) vs. number of nodes.
24 pages, 2642 KiB  
Article
Influence of On-Site Camera Calibration with Sub-Block of Images on the Accuracy of Spatial Data Obtained by PPK-Based UAS Photogrammetry
by Kalima Pitombeira and Edson Mitishita
Remote Sens. 2023, 15(12), 3126; https://doi.org/10.3390/rs15123126 - 15 Jun 2023
Viewed by 1284
Abstract
Unmanned Aerial Systems (UAS) Photogrammetry has become widely used for spatial data acquisition. Nowadays, RTK (Real Time Kinematic) and PPK (Post Processed Kinematic) are the main correction methods for accurate positioning used for direct measurements of camera station coordinates in UAS imagery. Thus, 3D camera coordinates are commonly used as additional observations in Bundle Block Adjustment to perform Global Navigation Satellite System-Assisted Aerial Triangulation (GNSS-AAT). This process requires accurate Interior Orientation Parameters to ensure the quality of photogrammetric intersection. Therefore, this study investigates the influence of on-site camera calibration with a sub-block of images on the accuracy of spatial data obtained by PPK-based UAS Photogrammetry. For this purpose, experiments of on-the-job camera self-calibration in the Metashape software with the SfM approach were performed. Afterward, experiments of GNSS-Assisted Aerial Triangulation with on-site calibration in the Erdas Imagine software were performed. The outcomes show that only the experiment of GNSS-AAT with three Ground Control Points yielded horizontal and vertical accuracies close to nominal precisions of the camera station positions by GNSS-PPK measurements adopted in this study, showing horizontal RMSE (Root-Mean Square Error) of 0.222 m and vertical RMSE of 0.154 m. Furthermore, the on-site camera calibration with a sub-block of images significantly improved the vertical accuracy of the spatial information extraction. Full article
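As a small aside, the horizontal and vertical RMSE figures reported above are straightforward to reproduce from checkpoint discrepancies; the sketch below shows the computation with made-up checkpoint coordinates, not the study's data.

```python
import numpy as np

# Hypothetical checkpoint coordinates (metres): surveyed reference values vs.
# values estimated by the photogrammetric block. Placeholder numbers only.
reference = np.array([[500010.12, 7180020.45, 912.30],
                      [500105.88, 7180134.02, 915.11],
                      [500210.40, 7180250.77, 909.84]])
estimated = np.array([[500010.31, 7180020.29, 912.46],
                      [500105.70, 7180134.25, 914.93],
                      [500210.62, 7180250.60, 910.02]])

def checkpoint_rmse(ref, est):
    """Horizontal (2D) and vertical (1D) RMSE of checkpoint discrepancies."""
    diff = est - ref
    rmse_h = np.sqrt(np.mean(np.sum(diff[:, :2] ** 2, axis=1)))
    rmse_v = np.sqrt(np.mean(diff[:, 2] ** 2))
    return rmse_h, rmse_v

rmse_h, rmse_v = checkpoint_rmse(reference, estimated)
print(f"Horizontal RMSE: {rmse_h:.3f} m, vertical RMSE: {rmse_v:.3f} m")
```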
Figure 1: Representation of the experimental setup and methodology employed in the on-the-job calibration and GNSS-AAT experiments.
Figure 2: GCP and checkpoint configurations. (a) Photogrammetric block with only 12 checkpoints. (b) Photogrammetric block with one GCP and 11 checkpoints. (c) Photogrammetric block with three GCPs and 9 checkpoints.
Figure 3: Sub-block location and GCP distribution.
Figure 4: Checkpoint discrepancies in self-calibrations. (a) Horizontal discrepancies obtained in camera self-calibration without GCPs; (b) vertical discrepancies obtained in camera self-calibration without GCPs; (c) horizontal discrepancies obtained in camera self-calibration with one GCP; (d) vertical discrepancies obtained in camera self-calibration with one GCP; (e) horizontal discrepancies obtained in camera self-calibration with three GCPs; (f) vertical discrepancies obtained in camera self-calibration with three GCPs.
Figure 5: GCP and tie point distribution in the sub-block. (a) Ground control points measured manually; (b) tie points measured automatically.
Figure 6: Checkpoint discrepancies in GNSS-Assisted Aerial Triangulation. (a) Horizontal discrepancies obtained in GNSS-AAT without GCPs. (b) Vertical discrepancies obtained in GNSS-AAT without GCPs. (c) Horizontal discrepancies obtained in GNSS-AAT with one GCP. (d) Vertical discrepancies obtained in GNSS-AAT with one GCP. (e) Horizontal discrepancies obtained in GNSS-AAT with three GCPs. (f) Vertical discrepancies obtained in GNSS-AAT with three GCPs.
22 pages, 6807 KiB  
Article
IRSDD-YOLOv5: Focusing on the Infrared Detection of Small Drones
by Shudong Yuan, Bei Sun, Zhen Zuo, Honghe Huang, Peng Wu, Can Li, Zhaoyang Dang and Zongqing Zhao
Drones 2023, 7(6), 393; https://doi.org/10.3390/drones7060393 - 14 Jun 2023
Cited by 5 | Viewed by 2532
Abstract
With the rapid growth of the global drone market, a variety of small drones have posed a certain threat to public safety. Small drones therefore need to be detected in a timely manner so that effective countermeasures can be taken. At present, methods based on deep learning have made great breakthroughs in the field of target detection, but they are not good at detecting small drones. To solve this problem, we propose the IRSDD-YOLOv5 model, which is based on the current advanced detector YOLOv5. Firstly, in the feature extraction stage, we designed an infrared small target detection module (IRSTDM) suitable for the infrared recognition of small drones, which extracts and retains target details to allow IRSDD-YOLOv5 to effectively detect small targets. Secondly, in the target prediction stage, we used a small target prediction head (PH) to complete the prediction from the prior information output by the infrared small target detection module (IRSTDM). We optimized the loss function by calculating the distance between the true box and the predicted box to improve the detection performance of the algorithm. In addition, we constructed a single-frame infrared drone detection dataset (SIDD), annotated at the pixel level, and released it publicly. Reflecting realistic drone-intrusion settings, the dataset covers four scenes: city, sky, mountain and sea. We used mainstream instance segmentation algorithms (BlendMask, BoxInst, etc.) to train and evaluate performance on the four parts of the dataset. The experimental results show that the proposed algorithm performs well. The AP50 measurements of IRSDD-YOLOv5 in the mountain scene and ocean scene reached peak values of 79.8% and 93.4%, respectively, which are increases of 3.8% and 4% compared with YOLOv5. We also provide a theoretical analysis of the detection accuracy in the different scenarios of the dataset. Full article
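The distance-based loss term mentioned above (see Figure 7 in the list below) is commonly implemented as a normalized Gaussian Wasserstein distance between boxes; the sketch below follows that general formulation, with the normalization constant chosen arbitrarily rather than taken from the paper.

```python
import math

def normalized_wasserstein_distance(box_a, box_b, c=12.8):
    """Normalized Gaussian Wasserstein distance between two boxes given as
    (cx, cy, w, h). Each box is modelled as a 2D Gaussian whose covariance is
    diag((w/2)^2, (h/2)^2); c is a dataset-dependent normalization constant
    (the value here is a placeholder, not taken from the paper)."""
    cxa, cya, wa, ha = box_a
    cxb, cyb, wb, hb = box_b
    # Squared 2-Wasserstein distance between the two Gaussians
    w2_sq = ((cxa - cxb) ** 2 + (cya - cyb) ** 2
             + (wa / 2 - wb / 2) ** 2 + (ha / 2 - hb / 2) ** 2)
    return math.exp(-math.sqrt(w2_sq) / c)

# Two nearly overlapping small boxes score close to 1; distant boxes near 0.
print(normalized_wasserstein_distance((10, 10, 6, 6), (11, 10, 6, 8)))
print(normalized_wasserstein_distance((10, 10, 6, 6), (60, 60, 6, 6)))
```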
Figure 1: Example image of the infrared small drone. The drone is marked with a red border and enlarged in the lower right corner.
Figure 2: Examples from the SIDD dataset, from top to bottom: city scene, mountain scene, sea scene and sky scene. The drone targets in the images are marked with red circles.
Figure 3: Segmentation process of the IRSDD-YOLOv5 network.
Figure 4: Overall architecture of IRSDD-YOLOv5. A PANet-like structure is used in the neck network, and the red part is the infrared small drone detection module added to the neck network. The four prediction heads use the feature maps generated from the neck network to fuse information about the targets. In addition, the number of each module is marked with an orange number on the left side of the module.
Figure 5: The specific structure of the C3 and SPPF modules.
Figure 6: The top part of the figure shows how IRSTDM works, and the bottom part shows the detailed structure of the module.
Figure 7: Example of a procedure for calculating the NWD between two boxes.
Figure 8: Visualization of target detection results. The leftmost image is the input, followed from left to right by the mask maps produced by mainstream segmentation methods. The target area marked by a red circle is enlarged in the upper right corner. GT represents the real area of the target mask.
Figure 9: Three-dimensional visualization of qualitative results of different instance segmentation methods. From left to right: the input original image and the segmentation results of BlendMask, BoxInst, CondInst and Mask R-CNN. From top to bottom: the city scene, mountain scene, sea scene and sky scene.
Figure 10: Three-dimensional visualization of qualitative results of different instance segmentation methods. From left to right: the input original image, the segmentation results of Yolact++, YOLOv5 and IRSDD-YOLOv5, and the real area of the target (GT). From top to bottom: the city scene, mountain scene, sea scene and sky scene.
Figure 11: Infrared drone images in different scenes compared with different three-dimensional images.
Figure 12: The process of stitching four images containing a single object into one image containing four objects.
Figure 13: An example of narrowing the detection area by introducing prior information from radar. The yellow sector indicates the general location of the drone.
21 pages, 656 KiB  
Article
A Cognitive Electronic Jamming Decision-Making Method Based on Q-Learning and Ant Colony Fusion Algorithm
by Chudi Zhang, Yunqi Song, Rundong Jiang, Jun Hu and Shiyou Xu
Remote Sens. 2023, 15(12), 3108; https://doi.org/10.3390/rs15123108 - 14 Jun 2023
Cited by 5 | Viewed by 2079
Abstract
In order to improve the efficiency and adaptability of cognitive radar jamming decision-making, a fusion algorithm (Ant-QL) based on ant colony optimization and Q-Learning is proposed in this paper. The algorithm does not rely on a priori information and enhances adaptability through real-time interactions between the jammer and the target radar. At the same time, it can be applied to single-jammer and multi-jammer countermeasure scenarios with strong jamming effects. First, traditional Q-Learning and DQN algorithms are discussed, and a radar jamming decision-making model is built for the simulation verification of each algorithm. Then, an improved Q-Learning algorithm is proposed to address the shortcomings of both algorithms. By introducing the pheromone mechanism of ant colony algorithms into Q-Learning and using the ε-greedy algorithm to balance the trade-off between exploration and exploitation, the algorithm largely avoids falling into a local optimum, thus accelerating convergence with good stability and robustness during the convergence process. In order to better adapt to the cluster countermeasure environment of future battlefields, the algorithm and model are extended to cluster cooperative jamming decision-making. We map each jammer in the cluster to an intelligent ant searching for the optimal path, and multiple jammers interact with each other to exchange information. During the confrontation, the method greatly improves the convergence speed and stability and reduces the jammer's hardware and power requirements. Assuming that the number of jammers is three, the simulation results show that the convergence speed of the Ant-QL algorithm improves by 85.4%, 80.56% and 72% compared with the Q-Learning, DQN and improved Q-Learning algorithms, respectively. During the convergence process, the Ant-QL algorithm is very stable and efficient, and its complexity is low. After the algorithms converge, the average response times of the four algorithms are 6.99 × 10⁻⁴ s, 2.234 × 10⁻³ s, 2.21 × 10⁻⁴ s and 1.7 × 10⁻⁴ s, respectively. The results show that the improved Q-Learning and Ant-QL algorithms also have clear advantages in average response time after convergence. Full article
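To make the pheromone-plus-ε-greedy idea concrete, here is a minimal tabular sketch that biases action selection with a pheromone table on top of standard Q-learning updates; the toy environment, reward rule, and constants are placeholders, not the paper's radar model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 6, 4          # placeholder radar states / jamming actions
Q = np.zeros((n_states, n_actions))
pheromone = np.ones((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
rho, deposit = 0.05, 1.0            # pheromone evaporation / deposit constants

def select_action(state):
    """ε-greedy selection biased by pheromone: explore with probability eps,
    otherwise pick the action maximizing a Q + normalized-pheromone score."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    score = Q[state] + pheromone[state] / pheromone[state].sum()
    return int(np.argmax(score))

def step(state, action):
    """Placeholder environment: reward 1 when the action index matches the
    state's (hypothetical) most effective jamming style, else 0."""
    reward = 1.0 if action == state % n_actions else 0.0
    return (state + 1) % n_states, reward

state = 0
for _ in range(5000):
    action = select_action(state)
    next_state, reward = step(state, action)
    # Standard Q-learning update
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    # Ant-colony style pheromone update: evaporate, then deposit on rewarded actions
    pheromone *= (1 - rho)
    pheromone[state, action] += deposit * reward
    state = next_state

print(np.argmax(Q, axis=1))  # learned best jamming action per state
```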
Figure 1: The development path of CEW.
Figure 2: Architecture of traditional radar countermeasure.
Figure 3: Typical OODA.
Figure 4: Cognitive electronic jamming decision-making model.
Figure 5: Multifunctional radar signal model.
Figure 6: Mapping of RL to radar jamming decision-making.
Figure 7: Structure of Q-table and pheromone table. (a) Structure of the Q table; (b) structure of the pheromone table.
Figure 8: Ant-QL algorithm flow chart.
Figure 9: Convergence process of the Q-Learning algorithm. (a) Convergence process of counts; (b) convergence process of rewards.
Figure 10: Convergence process of the DQN algorithm. (a) Convergence process of counts; (b) convergence process of rewards.
Figure 11: Convergence process of the improved Q-Learning algorithm. (a) Convergence process of counts; (b) convergence process of rewards.
Figure 12: Convergence process of Ant-QL. (a) Convergence process of counts; (b) convergence process of rewards.
22 pages, 14906 KiB  
Article
UAV-Based Low Altitude Remote Sensing for Concrete Bridge Multi-Category Damage Automatic Detection System
by Han Liang, Seong-Cheol Lee and Suyoung Seo
Drones 2023, 7(6), 386; https://doi.org/10.3390/drones7060386 - 8 Jun 2023
Cited by 6 | Viewed by 1816
Abstract
Detecting damage in bridges can be an arduous task, fraught with challenges stemming from the limitations of the inspection environment and the considerable time and resources required for manual acquisition. Moreover, prevalent damage detection methods rely heavily on pixel-level segmentation, rendering it infeasible to classify and locate different damage types accurately. To address these issues, the present study proposes a novel, fully automated concrete bridge damage detection system that harnesses the power of unmanned aerial vehicle (UAV) remote sensing technology. The proposed system employs a Swin Transformer-based backbone network, coupled with a multi-scale attention pyramid network featuring a lightweight residual global attention network (LRGA-Net), delivering marked gains in both speed and accuracy. Comparative analyses reveal that the proposed system outperforms commonly used target detection models, including the YOLOv5-L and YOLOX-L models. The robustness of the proposed system's visual inspection results in the real world reinforces its efficacy, ushering in a new paradigm for bridge inspection and maintenance. The study findings underscore the potential of UAV-based inspection as a means of bolstering the efficiency and accuracy of bridge damage detection, highlighting its pivotal role in ensuring the safety and longevity of vital infrastructure. Full article
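Figure 11 in the list below refers to K-means clustering of bounding-box sizes; as a minimal, self-contained illustration of how anchor priors can be derived this way (using randomly generated box dimensions rather than the paper's dataset), see the sketch below.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical normalized (width, height) pairs of damage bounding boxes;
# in practice these would be parsed from the dataset annotations.
rng = np.random.default_rng(42)
boxes = np.vstack([
    rng.normal([0.05, 0.30], 0.01, (100, 2)),   # long thin cracks
    rng.normal([0.20, 0.15], 0.03, (100, 2)),   # spallation patches
    rng.normal([0.10, 0.08], 0.02, (100, 2)),   # efflorescence spots
])
boxes = np.clip(boxes, 0.01, 1.0)

# Cluster box sizes to obtain anchor priors (K chosen arbitrarily here).
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(boxes)
anchors = kmeans.cluster_centers_[np.argsort(kmeans.cluster_centers_.prod(axis=1))]
print(np.round(anchors, 3))  # anchor (width, height) pairs, smallest area first
```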
Figure 1: Flowchart of the proposed UAV-based low-altitude remote sensing system for detecting multiple types of damage to bridges.
Figure 2: The bridge damage detection network applied to UAV inspection systems.
Figure 3: Illustration of (a) the W-MSA operation and (b) the SW-MSA operation.
Figure 4: The Swin Transformer encoder forms the backbone network and comprises the W-MSA, SW-MSA, LN, and MLP connected in series.
Figure 5: Sample of the PE module processing input images.
Figure 6: Sample of the PM module providing downsampling details for a channel.
Figure 7: The LRCA-Net spatial attention module is restricted by the convolutional kernel size, which limits the perceptual field to local feature information only.
Figure 8: Local dependencies are captured through convolutions (indicated in yellow), while long-distance connections (indicated in red) capture global dependencies.
Figure 9: The modified spatial attention module.
Figure 10: The overall architecture of LRGA-Net.
Figure 11: Normalized height and width clusters of bounding boxes obtained through K-means clustering.
Figure 12: The experimental site and inspection path. (a) Location of the Kyungdae Bridge test site; (b) DJI Avata UAV used for the inspection; (c) the inspection path taken by the UAV; (d) samples of specific areas inspected by the UAV.
Figure 13: Example of the attitude of the UAV while hovering for inspection.
Figure 14: Five categories of defects in the datasets: (a) crack, (b) spallation, (c) efflorescence, (d) exposed bars, and (e) corrosion.
Figure 15: The number and percentage of defects in each category.
Figure 16: Comparison of backbone networks: (a) convergence state of the loss function, and (b) mAP curves.
Figure 17: Three output sizes of the visual heat map, where more highlighted areas indicate higher attention weights given by the network. (a) No attention module and (b) LRGA-Net.
Figure 18: Precision × recall curves for each damage category: (a) corrosion stain, (b) crack, (c) efflorescence, (d) exposed bars, (e) spallation.
Figure 19: Comparison of the comprehensive performance of models: (a) parameters (M) vs. mAP, (b) parameters (M) vs. mF1.
Figure 20: Comparison of the computational efficiency of different models.
Figure 21: Sample results of the actual field output of different models. (a) Original image, (b) our approach, (c) YOLOX, (d) Faster-RCNN, (e) SSD.
Figure 22: Samples of error detection (indicated by red dashed lines) and omission detection (indicated by orange dashed lines) using our proposed model.
15 pages, 5430 KiB  
Article
Measurements of the Thickness and Area of Thick Oil Slicks Using Ultrasonic and Image Processing Methods
by Hualong Du, Huijie Fan, Qifeng Zhang and Shuo Li
Remote Sens. 2023, 15(12), 2977; https://doi.org/10.3390/rs15122977 - 7 Jun 2023
Cited by 2 | Viewed by 1700
Abstract
The in situ measurement of thick oil slick thickness (>0.5 mm) and area in real time in order to estimate the volume of an oil spill is very important for determining the oil spill response strategy and evaluating the oil spill disposal efficiency. In this article, a method is proposed to assess the volume of oil slicks by simultaneously measuring the thick oil slick thickness and area using ultrasonic inspection and image processing methods, respectively. A remotely operated vehicle (ROV), integrating two ultrasonic immersion transducers, was implemented as a platform to receive ultrasonic reflections from an oil slick. The oil slick thickness was determined by multiplying the speed of sound by the ultrasonic traveling time within the oil slick, which was calculated using the cross-correlation method. Images of the oil slick were captured by an optical camera using an airborne drone. The oil slick area was calculated by conducting image processing on images of the oil slick using the proposed image processing algorithms. Multiple measurements were performed to verify the proposed method in the laboratory experiments. The results show that the thickness, area and volume of a thick oil slick can be accurately measured with the proposed method. The method could potentially be used as an applicable tool for measuring the volume of an oil slick during an oil spill response. Full article
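The cross-correlation step described above can be sketched as follows: estimate the delay between the top-of-slick echo and the oil/water-interface echo, then convert it to thickness with an assumed sound speed. The waveform, sampling rate, and sound speed here are synthetic placeholders, not the paper's measurements.

```python
import numpy as np

fs = 50e6                      # sampling rate (Hz), placeholder
c_oil = 1400.0                 # assumed speed of sound in crude oil (m/s)
t = np.arange(0, 20e-6, 1 / fs)

def pulse(t0):
    """Synthetic 5 MHz tone burst centred at time t0."""
    return np.exp(-((t - t0) / 0.4e-6) ** 2) * np.sin(2 * np.pi * 5e6 * (t - t0))

# Echo from the top of the slick plus a weaker echo from the oil/water interface.
true_delay = 2.4e-6
signal = pulse(5e-6) + 0.5 * pulse(5e-6 + true_delay)

top_echo = pulse(5e-6)         # gated reference of the top reflection
xcorr = np.correlate(signal, top_echo, mode="full")
lags = np.arange(-len(t) + 1, len(t)) / fs

# Keep only lags well past zero to find the second (bottom) reflection.
positive = lags > 1e-6
delay = lags[positive][np.argmax(xcorr[positive])]

thickness = c_oil * delay / 2   # pulse-echo: two-way travel within the slick
print(f"estimated delay {delay*1e6:.2f} us, thickness {thickness*1e3:.2f} mm")
```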
Figure 1: The schematic of ultrasonic reflections from an oil slick.
Figure 2: Schematic of capturing images of the oil slick area using a drone: (a) real drone acquisition scenario; (b) coordinate map.
Figure 3: The schematic of the oil area measurement with the image processing method.
Figure 4: Schematic of measuring oil slick volume using the ROV and drone platforms.
Figure 5: (a) The ROV carrying two 5 MHz ultrasonic transducers submerged in the tank before adding oil, (b) the oil slick boomed in the water tank, (c) the six-rotor drone platform with an optical camera used for capturing images.
Figure 6: (a) The image of ultrasonic signals, (b) the ultrasonic signal captured at ~200 s. The green and red gates were used to extract the reflections from the top and bottom of the oil slick, respectively.
Figure 7: (a) The image of the cross-correlation of two signals, (b) the representative cross-correlation of two signals (blue) at ~200 s and a fitted curve (red).
Figure 8: The original RGB image of the experimental site with a water tank.
Figure 9: The water tank cropped from the image (left); the detection of the oil slick and the calculation of the oil slick area (right).
Figure 10: The thickness (blue) and area (orange) of the oil slick (10 L crude oil added), (a) test 1, (b) test 2.
Figure 11: The measured thickness (blue) and area (orange) of the oil slick (15 L crude oil used), (a) test 1, (b) test 2.
Figure 12: (a) The measured volume of the oil slick (15 L oil added in the water tank), (b) the volume of the oil slick averaged within each period.
22 pages, 10248 KiB  
Article
Identifying the Optimal Radiometric Calibration Method for UAV-Based Multispectral Imaging
by Louis Daniels, Eline Eeckhout, Jana Wieme, Yves Dejaegher, Kris Audenaert and Wouter H. Maes
Remote Sens. 2023, 15(11), 2909; https://doi.org/10.3390/rs15112909 - 2 Jun 2023
Cited by 17 | Viewed by 3783
Abstract
The development of UAVs and multispectral cameras has led to remote sensing applications with unprecedented spatial resolution. However, uncertainty remains about the radiometric calibration process for converting raw images to surface reflectance. Several calibration methods exist, but the advantages and disadvantages of each are not well understood. We performed an empirical analysis of five different methods for calibrating a 10-band multispectral camera, the MicaSense RedEdge MX Dual Camera System, by comparing multispectral images with spectrometer measurements taken in the field on the same day. Two datasets were collected on the same field, one in clear-sky and one in overcast conditions. We found that the empirical line method (ELM), using multiple radiometric reference targets imaged at mission altitude, performed best in terms of bias and RMSE. However, two user-friendly commercial solutions relying on a single grey reference panel were only slightly less accurate and resulted in sufficiently accurate reflectance maps for most applications, particularly in clear-sky conditions. In overcast conditions, the accuracy gain from the more elaborate methods was larger. Incorporating measurements from an integrated downwelling light sensor (DLS2) improved neither the bias nor the RMSE, even in overcast conditions. Ultimately, the choice of calibration method depends on the required accuracy, time constraints and flight conditions. When the more accurate ELM is not possible, commercial, user-friendly solutions like those offered by Agisoft Metashape and Pix4D can be good enough. Full article
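For readers unfamiliar with the empirical line method, the sketch below fits a per-band linear model between at-sensor values over the reference targets and their spectrometer-measured reflectance, then applies it to a band; all numbers are placeholders rather than the study's measurements.

```python
import numpy as np

# Hypothetical data for one spectral band: average at-sensor values extracted
# over six grey reference targets, and their spectrometer-measured surface
# reflectance. Values are placeholders, not the paper's measurements.
camera_values = np.array([0.021, 0.056, 0.113, 0.228, 0.341, 0.472])
surface_reflectance = np.array([0.03, 0.09, 0.19, 0.36, 0.53, 0.71])

# Empirical line method: fit a per-band linear model SR = gain * CV + offset.
gain, offset = np.polyfit(camera_values, surface_reflectance, 1)

def calibrate(band_image):
    """Apply the per-band empirical line correction to an image array."""
    return gain * band_image + offset

raw_band = np.array([[0.10, 0.20], [0.30, 0.40]])   # toy 2x2 band "image"
print(np.round(calibrate(raw_band), 3))
```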
Figure 1: Overview of the experimental site at 50°57′29.1″N 3°46′00.9″E.
Figure 2: Left: white reference panel captured in band 5 (642 nm–658 nm) of the MicaSense dual camera system. Right: aerial image of the 6 RRTs captured in band 2 (459 nm–491 nm).
Figure 3: Spectrometer measurements for the six gray RRTs used for corrections on 6 October. The gray bands indicate the multispectral bands of the camera. The spectrometer was calibrated using a Spectralon 99% reflectance panel before each measurement.
Figure 4: Schematic representation of the 5 different methods. The inputs, calibration methods and mosaicing methods are shown. The graphs depicting the ELM with one or 6 reference targets are shown with surface reflectance (SR) on the Y-axis and camera reflectance (CR) on the X-axis. Calibrations before mosaicing were done at image level; after mosaicing, calibrations were done on the orthomosaic.
Figure 5: Overview of the workflow for calibrating multispectral images using the ELM-MP method. Rectangles depict processing steps, slanted rectangles depict inputs, trapeziums depict manual steps in the workflow and rounded shapes depict the output.
Figure 6: Illustration of the two linear models for each band used for describing the relationship between at-sensor radiance and surface reflectance.
Figure 7: Irradiance measured by the DLS2 sensor during the flight on 6 October (a, sunny) and 3 October (b, overcast). The temporal profile of the measurements was very similar for all bands of the camera for each flight. (A) Spectral irradiance, (B) horizontal irradiance, (C) direct irradiance, (D) scattered irradiance, (E) corrected irradiance.
Figure 8: RMSE, bias and rRMSE comparison between the different methods on the dataset captured in clear-sky conditions. P4D-SP: Pix4D calibration, AM-SP: Metashape calibration, AM-MP: Metashape with empirical line correction on orthophoto, MS: single panel correction, MP: multiple panel correction. The mean was calculated without band 10 (842 nm).
Figure 9: RMSE, bias and rRMSE comparison between the different methods on the dataset captured in overcast conditions. P4D-SP: Pix4D calibration, AM-SP: Metashape calibration, AM-MP: Metashape with empirical line correction on orthophoto, MS: single panel correction, MP: multiple panel correction.
Figure 10: Detail image of band 8 (717 nm) showing slight color differences between the calibration methods. The depicted area shows the edge of the beans (top left) and corn (bottom right) plots with grass in between.
Figure 11: Scatterplot of the different land covers for band 8 (717 nm) for the overcast dataset. (A) P4D-SP method, (B) AM-SP method, (C) AM-MP method, (D) MS-SP method, (E) MS-MP method, (F) ELM-MP method.
Figure A1: Output of the MS-MP (a) and ELM-MP (b) methods for the clear-sky dataset. Subplots (A–J) depict the different bands of the MicaSense Dual Camera System from band 1–10.
35 pages, 8889 KiB  
Article
JO-TADP: Learning-Based Cooperative Dynamic Resource Allocation for MEC–UAV-Enabled Wireless Network
by Shabeer Ahmad, Jinling Zhang, Adil Khan, Umar Ajaib Khan and Babar Hayat
Drones 2023, 7(5), 303; https://doi.org/10.3390/drones7050303 - 4 May 2023
Cited by 7 | Viewed by 2030
Abstract
Providing robust communication services to mobile users (MUs) is a challenging task due to the dynamicity of MUs. Unmanned aerial vehicles (UAVs) and mobile edge computing (MEC) are used to improve connectivity by allocating resources to MUs more efficiently in a dynamic environment. However, energy consumption and lifetime issues in UAVs severely limit the resources and communication services. In this paper, we propose a dynamic cooperative resource allocation scheme for MEC–UAV-enabled wireless networks called joint optimization of trajectory, altitude, delay, and power (JO-TADP) using anarchic federated learning (AFL) and other learning algorithms to enhance data rate, use rate, and resource allocation efficiency. Initially, the MEC–UAVs are optimally positioned based on the MU density using the beluga whale optimization (BLWO) algorithm. Optimal clustering is performed in terms of splitting and merging using the triple-mode density peak clustering (TM-DPC) algorithm based on user mobility. Moreover, the trajectory, altitude, and hovering time of MEC–UAVs are predicted and optimized using the self-simulated inner attention long short-term memory (SSIA-LSTM) algorithm. Finally, the MUs and MEC–UAVs play auction games based on the classified requests, using an AFL-based cross-scale attention feature pyramid network (CSAFPN) and enhanced deep Q-learning (EDQN) algorithms for dynamic resource allocation. To validate the proposed approach, our system model has been simulated in Network Simulator 3.26 (NS-3.26). The results demonstrate that the proposed work outperforms the existing works in terms of connectivity, energy efficiency, resource allocation, and data rate. Full article
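As a loose illustration of the auction step (not the paper's EDQN-based scheme), the sketch below allocates a UAV's resource blocks to the highest bidders under a simple Vickrey-style pricing rule, with invented bids and capacity.

```python
import numpy as np

# Hypothetical sealed-bid auction for a UAV's resource blocks: each mobile user
# bids per requested block; the highest bids win until capacity runs out, and
# winners pay the highest losing bid (a simple Vickrey-style rule). This is an
# illustrative allocation rule only, not the paper's learning-based scheme.
rng = np.random.default_rng(1)
capacity = 5                                   # resource blocks on one MEC-UAV
bids = {f"MU{i}": float(rng.uniform(0.5, 3.0)) for i in range(8)}

ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
winners = ranked[:capacity]
clearing_price = ranked[capacity][1] if len(ranked) > capacity else 0.0

for user, bid in winners:
    print(f"{user} wins one block (bid {bid:.2f}), pays {clearing_price:.2f}")
```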
Figure 1: The Overall Architecture of the Proposed JO-TADP-based Resource Allocation in MEC–UAV Networks.
Figure 2: TM-DPC Clustering.
Figure 3: SSIA-LSTM-based trajectory and altitude optimization.
Figure 4: EDQN-based Cooperative Shared Resource Allocation.
Figure 5: Number of MEC–UAVs vs. connectivity.
Figure 6: Number of MEC–UAVs vs. energy consumption.
Figure 7: Number of MEC–UAVs vs. utility rate.
Figure 8: Number of MEC–UAVs vs. data rate.
Figure 9: Number of MUs vs. data rate.
Figure 10: Number of MUs vs. delay time.
Figure 11: Number of MEC–UAVs vs. resource allocation efficiency.
Figure 12: Computational time with respect to the number of MUs.
25 pages, 20160 KiB  
Article
The Development of Copper Clad Laminate Horn Antennas for Drone Interferometric Synthetic Aperture Radar
by Anthony Carpenter, James A. Lawrence, Richard Ghail and Philippa J. Mason
Drones 2023, 7(3), 215; https://doi.org/10.3390/drones7030215 - 20 Mar 2023
Cited by 5 | Viewed by 3731
Abstract
Interferometric synthetic aperture radar (InSAR) is an active remote sensing technique that typically utilises satellite data to quantify Earth surface and structural deformation. Drone InSAR should provide improved spatial-temporal data resolutions and operational flexibility. This necessitates the development of custom radar hardware for drone deployment, including antennas for the transmission and reception of microwave electromagnetic signals. We present the design, simulation, fabrication, and testing of two lightweight and inexpensive copper clad laminate (CCL)/printed circuit board (PCB) horn antennas for C-band radar deployed on the DJI Matrice 600 Pro drone. This is the first demonstration of horn antennas fabricated from CCL, and the first complete overview of antenna development for drone radar applications. The dimensions are optimised for the desired gain and centre frequency of 19 dBi and 5.4 GHz, respectively. The S11, directivity/gain, and half power beam widths (HPBW) are simulated in MATLAB, with the antennas tested in a radio frequency (RF) electromagnetic anechoic chamber using a calibrated vector network analyser (VNA) for comparison. The antennas are highly directive with gains of 15.80 and 16.25 dBi, respectively. The reduction in gain compared to the simulated value is attributed to a resonant frequency shift caused by the brass input feed increasing the electrical dimensions. The measured S11 and azimuth HPBW either meet or exceed the simulated results. A slight performance disparity between the two antennas is attributed to minor artefacts of the manufacturing and testing processes. The incorporation of the antennas into the drone payload is presented. Overall, both antennas satisfy our performance criteria and highlight the potential for CCL/PCB/FR-4 as a lightweight and inexpensive material for custom antenna production in drone radar and other antenna applications. Full article
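The relationship between the 19 dBi gain target and the horn aperture size can be sketched with the standard aperture-antenna relation G = ε_ap·4πA/λ²; the aperture efficiency and aspect ratio below are typical assumed values, not the authors' design numbers.

```python
import math

c = 299_792_458.0          # speed of light (m/s)
f0 = 5.4e9                 # centre frequency (Hz)
target_gain_dbi = 19.0     # desired gain
aperture_eff = 0.51        # typical pyramidal-horn aperture efficiency (assumed)

wavelength = c / f0
# Standard aperture-antenna relation: G = eff * 4*pi*A / lambda^2, solved for A
required_area = 10 ** (target_gain_dbi / 10) * wavelength ** 2 / (4 * math.pi * aperture_eff)

# Assume a roughly 4:3 aperture aspect ratio (an illustrative choice only).
width = math.sqrt(required_area * 4 / 3)
height = required_area / width
print(f"lambda = {wavelength*1000:.1f} mm, aperture ≈ {width*1000:.0f} x {height*1000:.0f} mm")
```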
Figure 1: Antenna waveguide schematic and dimensions (mm): (a) XY-plane view; (b) YZ-plane view.
Figure 2: Horn antenna dimensions schematic: (a) YZ-plane view; (b) XZ-plane view.
Figure 3: Simulated S11 results from 1 MHz to 6 GHz, with −14.448 dB at f0 (5.40 GHz).
Figure 4: Simulated antenna directivity pattern with antenna overlay and a maximum value of 18.69 dBi at f0 (5.40 GHz).
Figure 5: Simulated antenna beamwidths, with a maximum directivity of 18.69 dBi at f0 (5.40 GHz), in the: (a) azimuth plane (azimuth 1°–360°, elevation 0°); (b) elevation plane (azimuth 0°, elevation 1°–360°). Azimuth and elevation HPBW values (Δθ) of (a) 18.65° and (b) 18.30° respectively, calculated as C2−C1.
Figure 6: Soldered monopole schematic with numbered components and dimensions (mm).
Figure 7: Photographs of the CCL horn antennas: (a) external view; (b) internal view.
Figure 8: Antenna 1 S11 experimental setup in the anechoic chamber.
Figure 9: Radiation pattern experimental setup in the anechoic chamber. Receiving Antenna 1 (foreground) upon a foam wedge and modified turntable with polar plot. Transmitting Antenna 2 upon a larger foam wedge (background) at a farfield distance of 2.70 m. Antenna 1 is positioned at 180° θ from Antenna 2.
Figure 10: Drone radar payload schematic with annotated dimensions (mm) and annotated features described in Table 2: (a) top view; (b) bottom view.
Figure 11: Antenna orientation with respect to the drone orientation and direction of flight, to achieve a side-looking viewing geometry of a target: (a) forwards flight, with side-mounted antennas; (b) sideways flight, with forward-mounted antennas.
Figure 12: S11 results for Antenna 1 (blue) and 2 (red) from 0.00 to 6.00 GHz, with lines at x = 5.40 GHz (f0), y = −10 dB (acceptable threshold), and y = −20 dB (very good threshold). Dotted lines are measured S11 values at 501 frequency points from 1 MHz to 6.00 GHz. Solid lines are moving mean averages with a window length of 9 measurements (0.10 GHz). S11 at f0 is −14.20 dB and −20.70 dB for Antennas 1 and 2 respectively.
Figure 13: Gain results for Antenna 1 (blue) and 2 (red) between 5.00 and 6.00 GHz, with a line at x = 5.40 GHz (f0). Dotted lines are measured GRel values. Solid lines (GAUT) are the sum of GRef (black) and GRel at each frequency point. GAUT at f0 is 15.80 and 16.25 dBi for Antennas 1 and 2, respectively.
Figure 14: Azimuth radiation pattern results (azimuth 1°–360°, elevation 0°), with maximum directivities of 15.80 and 16.25 dBi at f0 (5.40 GHz) for (a) Antenna 1; (b) Antenna 2, respectively. Azimuth HPBW values (Δθ) of (a) 15.89° and (b) 15.86° respectively, calculated as C2−C1.
Figure 15: Drone radar payload, with the CCL horn antennas, Ettus USRP E312 SDR and 3D printed connection stabiliser, and Raspberry Pi 4 (on the back).
Figure 16: Drone radar payload attached to the DJI Matrice 600 Pro.
Figure 17: Internal waveguide photographs of (a) Antenna 1, with untidy soldering; (b) Antenna 2, with tidy soldering.
Figure 18: Azimuth radiation pattern main lobe results (azimuth 330°–30°, elevation 0°) for Antenna 1 (blue) and Antenna 2 (red), with the simulated azimuth radiation pattern main lobe at the same angular range (black): (a) polar pattern; (b) magnitude plot.
22 pages, 9548 KiB  
Article
Mine Pit Wall Geological Mapping Using UAV-Based RGB Imaging and Unsupervised Learning
by Peng Yang, Kamran Esmaeili, Sebastian Goodfellow and Juan Carlos Ordóñez Calderón
Remote Sens. 2023, 15(6), 1641; https://doi.org/10.3390/rs15061641 - 18 Mar 2023
Cited by 6 | Viewed by 2808
Abstract
In surface mining operations, geological pit wall mapping is important since it provides significant information on the surficial geological features throughout the pit wall faces, thereby improving geological certainty and operational planning. Conventional pit wall geological mapping techniques generally rely on close visual observations and laboratory testing results, which can be both time- and labour-intensive and can expose the technical staff to different safety hazards on the ground. In this work, a case study was conducted by investigating the use of drone-acquired RGB images for pit wall mapping. High spatial resolution RGB image data were collected using a commercially available unmanned aerial vehicle (UAV) at two gold mines in Nevada, USA. Cluster maps were produced using unsupervised learning algorithms, including the implementation of convolutional autoencoders, to explore the use of unlabelled image data for pit wall geological mapping purposes. While the results are promising for simple geological settings, they deviate from human-labelled ground truth maps in more complex geological conditions. This indicates the need to further optimize and explore the algorithms to increase robustness for more complex geological cases. Full article
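A minimal version of the clustering workflow, assuming simple per-patch colour statistics instead of the study's convolutional-autoencoder features, might look like the sketch below (the image is a random stand-in for an orthomosaic).

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical workflow: tile an orthomosaic into small patches, describe each
# patch by simple colour statistics, and cluster the patches. The real study
# also explores convolutional-autoencoder features; plain RGB statistics are
# used here only to keep the sketch self-contained.
rng = np.random.default_rng(0)
orthomosaic = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)  # stand-in image

patch = 32
tiles = (orthomosaic
         .reshape(512 // patch, patch, 512 // patch, patch, 3)
         .swapaxes(1, 2)
         .reshape(-1, patch, patch, 3))

# Per-patch mean and standard deviation of each colour channel as features.
features = np.concatenate([tiles.mean(axis=(1, 2)), tiles.std(axis=(1, 2))], axis=1)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
cluster_map = labels.reshape(512 // patch, 512 // patch)  # coarse cluster map of the wall
print(cluster_map)
```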
Figure 1: Location of Kinross Gold's Bald Mountain mine (39°56′N, 115°36′W, WGS 84) and McEwen Mining's Gold Bar mine (39°47′N, 116°20′W, WGS 84) shown by the red and blue diamond symbols, respectively.
Figure 2: (a) The northern area of Kinross Gold Bald Mountain Mine's Top Pit. (b) The southeastern area of McEwen Mining Gold Bar Mine's Pick Pit. The highlighted regions roughly indicate the pit wall sections that were covered.
Figure 3: (a) Dense point clouds created from pit wall images of Top Pit. (b) Dense point clouds created from pit wall images of Pick Pit. Point cloud generation was done via Agisoft Metashape on the high-quality, mild-filtering setting. The regions in the red boxes are the study areas for dataset creation and analysis.
Figure 4: (a) The orthomosaic of the selected pit wall section for Top Pit (simple case). (b) The corresponding “novice-labelled” ground truth map.
Figure 5: (a) The orthomosaic of the selected pit wall section for Pick Pit (complex case). (b) The corresponding “novice-labelled” ground truth map.
Figure 6: An illustration of the cluster map generation process using K-Means clustering only.
Figure 7: An illustration of the cluster map generation process using autoencoder-first K-Means clustering.
Figure 8: Coloured cluster maps of the Top Pit pit wall orthomosaic. Colour assignment of the cluster groups was based on visual comparison to the ground truth in terms of spatial correspondence. (a) The K-Means clustering map; (b) the autoencoder-first (Model MT) K-Means clustering map; (c) the autoencoder-first (Model PY) K-Means clustering map.
Figure 9: Coloured cluster map of the Top Pit orthomosaic using the ISO Cluster Classification Tool for four classes.
Figure 10: Coloured cluster maps of the Pick Pit pit wall orthomosaic. Colour assignment of the cluster groups was based on visual comparison to the ground truth in terms of spatial correspondence. (a) The K-Means clustering-only map; (b) the autoencoder-first (Model MT) K-Means clustering map; (c) the autoencoder-first (Model PY) K-Means clustering map.
Figure 11: Coloured cluster map of the Pick Pit orthomosaic using the ISO Cluster Classification Tool for three classes.
19 pages, 2928 KiB  
Article
Estimating Black Oat Biomass Using Digital Surface Models and a Vegetation Index Derived from RGB-Based Aerial Images
by Lucas Renato Trevisan, Lisiane Brichi, Tamara Maria Gomes and Fabrício Rossi
Remote Sens. 2023, 15(5), 1363; https://doi.org/10.3390/rs15051363 - 28 Feb 2023
Cited by 4 | Viewed by 1827
Abstract
Responsible for food production and industry inputs, agriculture needs to adapt to worldwide increasing demands and environmental requirements. In this scenario, black oat has gained environmental and economic importance since it can be used in no-tillage systems, green manure, or animal feed supplementation. Despite its importance, few studies have been conducted to introduce more accurate and technological applications. Plant height (H) correlates with biomass production, which is related to yield. Similarly, productivity status can be estimated from vegetation indices (VIs). The use of unmanned aerial vehicles (UAV) for imaging enables greater spatial and temporal resolutions from which to derive information such as H and VI. However, faster and more accurate methodologies are necessary for the application of this technology. This study intended to obtain high-quality digital surface models (DSMs) and orthoimages from UAV-based RGB images via a direct-to-process means; that is, without the use of ground control points or image pre-processing. DSMs and orthoimages were used to derive H (HDSM) and VIs (VIRGB), which were used for H and dry biomass (DB) modeling. Results showed that HDSM presented a strong correlation with actual plant height (HREF) (R2 = 0.85). Modeling biomass based on HDSM demonstrated better performance for data collected up until and including the grain filling (R2 = 0.84) and flowering (R2 = 0.82) stages. Biomass modeling based on VIRGB performed better for data collected up until and including the booting stage (R2 = 0.80). The best results for biomass estimation were obtained by combining HDSM and VIRGB, with data collected up until and including the grain filling stage (R2 = 0.86). Therefore, the presented methodology has permitted the generation of trustworthy models for estimating the H and DB of black oats. Full article
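The combined height-plus-vegetation-index biomass model can be illustrated with an ordinary least-squares fit; the plot-level values below are invented placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical plot-level observations: canopy height from the DSM (m), an
# RGB vegetation index value, and measured dry biomass (t/ha).
h_dsm = np.array([0.35, 0.52, 0.68, 0.80, 0.95, 1.10])
vi_rgb = np.array([0.12, 0.18, 0.22, 0.27, 0.30, 0.34])
dry_biomass = np.array([1.1, 1.9, 2.6, 3.4, 4.0, 4.8])

# Combined model DB = b0 + b1*H + b2*VI, fitted by ordinary least squares.
X = np.column_stack([np.ones_like(h_dsm), h_dsm, vi_rgb])
coef, *_ = np.linalg.lstsq(X, dry_biomass, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((dry_biomass - pred) ** 2) / np.sum((dry_biomass - dry_biomass.mean()) ** 2)
print(np.round(coef, 3), round(r2, 3))   # fitted coefficients and R^2
```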
Figure 1: Graphical abstract of the present study to estimate black oat biomass using RGB-based UAV images.
Figure 2: Photogrammetric products resulting from the flight performed on September 15. (a) Orthoimage. (b) Digital surface model.
Figure 3: Regression models between HREF and HDSM. (A) Regression model using all data; (B) regression model using data collected up until and including the flowering stage; (C) regression model using data collected up until and including the booting stage. RMSE = root mean square error (m), n = number of observations. For all models p < 0.05.
Figure 4: (A,C,E) Regression models between HREF and HDSM using the modeling dataset for all stages, the flowering stage, and the booting stage, respectively; (B,D,F) performance of the regression models with the validation dataset for all stages, the flowering stage, and the booting stage, respectively. RMSE = root mean square error (m), n = number of observations. For all models p < 0.05.
Figure 5: Validation for the model using combinatory regression with HDSM and TCVI.
16 pages, 9064 KiB  
Article
Fine Classification of UAV Urban Nighttime Light Images Based on Object-Oriented Approach
by Daoquan Zhang, Deping Li, Liang Zhou and Jiejie Wu
Sensors 2023, 23(4), 2180; https://doi.org/10.3390/s23042180 - 15 Feb 2023
Cited by 5 | Viewed by 2228
Abstract
Fine classification of urban nighttime lighting is a key prerequisite for small-scale nighttime urban research. To fill the gap in high-resolution urban nighttime light image classification and recognition research, this paper uses a small rotary-wing UAV platform and takes nighttime static monocular tilted light images of communities near Meixi Lake in Changsha City as research data. Using an object-oriented classification method to fully extract the spectral, textural and geometric features of urban nighttime lights, we build four classification models based on random forest (RF), support vector machine (SVM), K-nearest neighbor (KNN) and decision tree (DT), respectively, to finely extract five types of nighttime lights: window light, neon light, road reflective light, building reflective light and background. The main conclusions are as follows: (i) Equal division of the image into three regions according to the viewing direction can alleviate the variable-scale problem of monocular tilted images, and the multiresolution segmentation results combined with Canny edge detection are more suitable for urban nighttime lighting images; (ii) RF has the highest classification accuracy among the four classification algorithms, with an overall classification accuracy of 95.36% and a kappa coefficient of 0.9381 in the far view region, followed by SVM and KNN, with DT the worst; (iii) Among the fine classification results of urban light types, window light and background have the highest classification accuracy, with both UA and PA above 93% in the RF classification model, while road reflective light has the lowest accuracy; (iv) Among the selected classification features, the spectral features have the highest contribution rates, which are above 59% in all three regions, followed by the textural features, with the geometric features contributing the least. This paper demonstrates the feasibility of nighttime UAV static monocular tilted image data for the fine classification of urban light types based on an object-oriented classification approach, and provides data and technical support for small-scale urban nighttime research such as community building identification and nighttime human activity perception. Full article
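A minimal object-based random forest workflow of the kind described above, using a random stand-in feature table rather than the paper's segmented light objects, might look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical per-object feature table (spectral, textural, geometric features
# of segmented light objects) with five light-type labels. Random stand-in data.
rng = np.random.default_rng(0)
n_objects, n_features = 600, 12
X = rng.normal(size=(n_objects, n_features))
y = rng.integers(0, 5, size=n_objects)   # 0..4: window, neon, road, building, background

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
pred = rf.predict(X_test)

print("overall accuracy:", round(accuracy_score(y_test, pred), 3))
print("kappa:", round(cohen_kappa_score(y_test, pred), 3))
print("feature contributions:", np.round(rf.feature_importances_, 3))
```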
Figure 1: The study area.
Figure 2: UAV nighttime city light image classification flow chart.
Figure 3: Equal division of the image into near, middle and far images along the direction of the field of view.
Figure 4: ROC−LV diagrams for the near, middle and far view regions.
Figure 5: Comparison of segmentation results. (The selected areas ①, ② and ③ indicate the segmentation of the window light, and the selected areas ④ and ⑤ indicate the separation between the building reflective light and the background.)
Figure 6: Fine-grained classification results of the four machine learning algorithms for urban nighttime lights in the near, middle and far view regions.
Figure 7: Feature contribution ranking.
Figure 8: Guanshaling night lighting image and region division.
19 pages, 44461 KiB  
Article
AFL-Net: Attentional Feature Learning Network for Building Extraction from Remote Sensing Images
by Yue Qiu, Fang Wu, Haizhong Qian, Renjian Zhai, Xianyong Gong, Jichong Yin, Chengyi Liu and Andong Wang
Remote Sens. 2023, 15(1), 95; https://doi.org/10.3390/rs15010095 - 24 Dec 2022
Cited by 7 | Viewed by 2012
Abstract
Convolutional neural networks (CNNs) perform well in tasks of segmenting buildings from remote sensing images. However, the intraclass heterogeneity of buildings is high in images, while the interclass homogeneity between buildings and other nonbuilding objects is low. This leads to an inaccurate distinction between buildings and complex backgrounds. To overcome this challenge, we propose an Attentional Feature Learning Network (AFL-Net) that can accurately extract buildings from remote sensing images. We designed an attentional multiscale feature fusion (AMFF) module and a shape feature refinement (SFR) module to improve building recognition accuracy in complex environments. The AMFF module adaptively adjusts the weights of multi-scale features through the attention mechanism, which enhances the global perception and ensures the integrity of building segmentation results. The SFR module captures the shape features of the buildings, which enhances the network capability for identifying the area between building edges and surrounding nonbuilding objects and reduces the over-segmentation of buildings. An ablation study was conducted with both qualitative and quantitative analyses, verifying the effectiveness of the AMFF and SFR modules. The proposed AFL-Net achieved 91.37, 82.10, 73.27, and 79.81% intersection over union (IoU) values on the WHU Building Aerial Imagery, Inria Aerial Image Labeling, Massachusetts Buildings, and Building Instances of Typical Cities in China datasets, respectively. Thus, the AFL-Net offers the prospect of application for successful extraction of buildings from remote sensing images. Full article
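The IoU metric reported above is simple to compute for binary building masks; here is a small sketch with toy 4×4 masks.

```python
import numpy as np

def intersection_over_union(pred_mask, gt_mask):
    """IoU for binary building masks (1 = building, 0 = background)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union if union else 1.0

# Toy 4x4 masks, purely illustrative.
gt = np.array([[0, 1, 1, 0],
               [0, 1, 1, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 0]])
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 1, 1],
                 [0, 0, 0, 0]])
print(round(intersection_over_union(pred, gt), 3))  # 4/6 ≈ 0.667
```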
Figure 1: Schematic diagram of the AFL-Net framework. The encoder extracts the features through the backbone, outputting four feature maps at different scales. The decoder fuses the feature maps via the AMFF module, optimizes the building shape features via the SFR module, and outputs the building segmentation mask after the classifier.
Figure 2: Schematic diagram of the AMFF module structure. The attention mechanism facilitates adaptive adjustment of the weights of the distinctive features.
Figure 3: Structure diagram of the attention mechanism in the AMFF module. Attentional weights are calculated at the channel and spatial dimensions.
Figure 4: Structure diagram of the SFR module. Deformable convolution and dilated convolution expand the receptive field.
Figure 5: Comparative results from the selected models on the WHU dataset.
Figure 6: Comparative results from the selected models on the Inria dataset.
Figure 7: Comparative results from the selected models on the Massachusetts dataset.
Figure 8: Comparative results from the selected models on the BITCC dataset.
Figure 9: (a–d) Sample building segmentation results from the selected models with the WHU dataset (comparative experiment).
Figure 10: (a–d) Sample building segmentation results from the selected models with the Inria dataset (comparative experiment).
Figure 11: (a–d) Sample building segmentation results from the selected models with the Massachusetts dataset (comparative experiment).
Figure 12: (a–d) Sample building segmentation results from the selected models with the BITCC dataset (comparative experiment).
Figure 13: Comparison of the accuracy and complexity of selected models, annotated with IoU and parameters of each model.
Figure 14: Comparison of the accuracy and complexity of selected models, annotated with IoU and computational cost of each model.
Figure 15: Samples of building extraction results by different models with four datasets (ablation study).
Figure 16: Visualization of feature maps by the selected models with four datasets (ablation study).
Figure 17: Comparison of the training and inference speeds of the selected models.