Search Results (1,244)

Search Parameters:
Keywords = agricultural vehicle

24 pages, 51328 KiB  
Article
A Shortest Distance Priority UAV Path Planning Algorithm for Precision Agriculture
by Guoqing Zhang, Jiandong Liu, Wei Luo, Yongxiang Zhao, Ruiyin Tang, Keyu Mei and Penggang Wang
Sensors 2024, 24(23), 7514; https://doi.org/10.3390/s24237514 - 25 Nov 2024
Abstract
Unmanned aerial vehicles (UAVs) have made significant advances in autonomous sensing, particularly in the field of precision agriculture. Effective path planning is critical for autonomous navigation in large orchards to ensure that UAVs are able to recognize the optimal route between the start and end points. When UAVs perform tasks such as crop protection, monitoring, and data collection in orchard environments, they must be able to adapt to dynamic conditions. To address these challenges, this study proposes an enhanced Q-learning algorithm designed to optimize UAV path planning by combining static and dynamic obstacle avoidance features. A shortest distance priority (SDP) strategy is integrated into the learning process to minimize the distance the UAV must travel to reach the target. In addition, the root mean square propagation (RMSP) method is used to dynamically adjust the learning rate according to gradient changes, which accelerates the learning process and improves path planning efficiency. First, the proposed method was compared with state-of-the-art path planning techniques (including A-star, Dijkstra, and traditional Q-learning) in terms of learning time and path length in a grid-based 2D simulation environment; the results showed that it significantly outperformed the existing methods. In addition, 3D simulation experiments were conducted in the AirSim virtual environment. Due to the complexity of the 3D state space, a deep neural network was used to calculate the Q-value based on the proposed algorithm. The results indicate that the proposed method achieves shortest-path planning and obstacle avoidance in an orchard 3D simulation environment. Drones equipped with this algorithm are therefore expected to make outstanding contributions to the development of precision agriculture through intelligent navigation and obstacle avoidance.
(This article belongs to the Special Issue Application of UAV and Sensing in Precision Agriculture)
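A minimal, illustrative sketch of the core mechanism follows. It is not the authors' implementation: the 30 × 30 grid, rewards, and hyperparameters are assumed for illustration. It shows tabular Q-learning with a shortest-distance-priority shaping term in the reward and an RMSProp-style running average of squared TD errors scaling each update:

```python
import numpy as np

# Hypothetical 2D grid: 0 = free, 1 = obstacle. Layout, start, and goal are illustrative.
GRID = np.zeros((30, 30), dtype=int)
GRID[10:20, 15] = 1
START, GOAL = (0, 0), (29, 29)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

Q = np.zeros((30, 30, len(ACTIONS)))
sq_grad = np.zeros_like(Q)            # RMSProp-style running average of squared TD errors
base_lr, decay, eps_rms = 0.1, 0.9, 1e-8
gamma, epsilon = 0.95, 0.1

def step(state, a):
    r, c = state[0] + ACTIONS[a][0], state[1] + ACTIONS[a][1]
    if not (0 <= r < 30 and 0 <= c < 30) or GRID[r, c]:
        return state, -5.0, False                      # collision penalty, stay in place
    if (r, c) == GOAL:
        return (r, c), 100.0, True
    # Shortest-distance-priority shaping: reward moves that reduce distance to the goal.
    d_new = abs(r - GOAL[0]) + abs(c - GOAL[1])
    d_old = abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])
    return (r, c), (d_old - d_new) - 0.1, False        # small per-step cost

rng = np.random.default_rng(0)
for episode in range(2000):
    s, done = START, False
    for _ in range(500):
        a = rng.integers(len(ACTIONS)) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        td = r + gamma * np.max(Q[s2]) * (not done) - Q[s][a]
        # RMSProp-style adaptive learning rate: large recent errors shrink the step size.
        sq_grad[s][a] = decay * sq_grad[s][a] + (1 - decay) * td ** 2
        Q[s][a] += base_lr / (np.sqrt(sq_grad[s][a]) + eps_rms) * td
        s = s2
        if done:
            break

print("Greedy value at start:", Q[START].max())
```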
Figures:
Figure 1. Orangery UAV collecting data.
Figure 2. 30 × 30 simulated grid obstacle environment.
Figure 3. Discrete action set.
Figure 4. UAV trajectory planning and collision avoidance training framework.
Figure 5. Structure of the Q-learning algorithm.
Figure 6. Deep Q-learning algorithm structure.
Figure 7. Screenshot of the AirSim environment.
Figure 8. Neural network architecture.
Figure 9. The 5 × 5 action space.
Figure 10. Performance of various unmanned aerial vehicle path planning algorithms in the presence of obstacles. (a) A-star. (b) Dijkstra. (c) Original Q-learning. (d) Proposed Q-learning. (e) Proposed Q-learning in the presence of two dynamic obstacles. (f) Proposed Q-learning in the presence of four dynamic obstacles.
Figure 11. Changes in step count during training in a static environment. (a) Q-learning algorithm; (b) proposed Q-learning algorithm.
Figure 12. Changes in step count during training with two dynamic obstacles. (a) Q-learning algorithm; (b) proposed Q-learning algorithm.
Figure 13. Changes in step count during training with four dynamic obstacles. (a) Q-learning algorithm; (b) proposed Q-learning algorithm.
Figure 14. Changes in cumulative rewards during training in a static environment. (a) Q-learning algorithm; (b) proposed Q-learning algorithm.
Figure 15. Changes in cumulative rewards during training with two dynamic obstacles. (a) Q-learning algorithm; (b) proposed Q-learning algorithm.
Figure 16. Changes in cumulative rewards during training with four dynamic obstacles. (a) Q-learning algorithm; (b) proposed Q-learning algorithm.
Figure 17. Changes in learning rates.
Figure 18. Training process diagram. (a) Loss function; (b) maximum reward value.
Figure 19. The obstacle avoidance process of the UAV. (a–d) UAV avoiding obstacles; (e–h) UAV crossing an obstacle.
24 pages, 10412 KiB  
Article
Deep Learning for Weed Detection and Segmentation in Agricultural Crops Using Images Captured by an Unmanned Aerial Vehicle
by Josef Augusto Oberdan Souza Silva, Vilson Soares de Siqueira, Marcio Mesquita, Luís Sérgio Rodrigues Vale, Thiago do Nascimento Borges Marques, Jhon Lennon Bezerra da Silva, Marcos Vinícius da Silva, Lorena Nunes Lacerda, José Francisco de Oliveira-Júnior, João Luís Mendes Pedroso de Lima and Henrique Fonseca Elias de Oliveira
Remote Sens. 2024, 16(23), 4394; https://doi.org/10.3390/rs16234394 - 24 Nov 2024
Viewed by 244
Abstract
Artificial Intelligence (AI) has changed how processes are developed and decisions are made in agriculture, replacing manual and repetitive processes with automated, more efficient ones. This study presents the application of deep learning techniques to detect and segment weeds in agricultural crops by applying models with different architectures to images captured by an Unmanned Aerial Vehicle (UAV). It contributes to the computer vision field by comparing the performance of You Only Look Once (YOLOv8n, YOLOv8s, YOLOv8m, and YOLOv8l), Mask R-CNN (with the Detectron2 framework), and U-Net models, and makes public a dataset of aerial images of soybeans and beans. The models were trained on a dataset of 3021 images, randomly divided into training, validation, and test sets, which were annotated, resized, and augmented using the Roboflow application interface. Evaluation metrics included mAP50 and mAP50-90, precision, accuracy, and recall. The YOLOv8s variant achieved the highest performance, with an mAP50 of 97%, precision of 99.7%, and recall of 99%. These results show that deep learning models can deliver efficient automatic weed detection when trained on a large, well-labeled dataset, and demonstrate the strong potential of advanced object segmentation algorithms for detecting weeds in soybean and bean crops.
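For orientation, a training run like the one described can be sketched with the Ultralytics YOLOv8 API; the dataset path, model variant, and hyperparameters below are placeholders, not the authors' configuration:

```python
from ultralytics import YOLO  # pip install ultralytics

# Placeholder dataset config: a Roboflow-exported data.yaml listing the train/val/test
# splits and class names (e.g., "weed"). All settings here are illustrative.
model = YOLO("yolov8s-seg.pt")                 # pretrained YOLOv8s segmentation weights
model.train(data="weeds/data.yaml", epochs=100, imgsz=640, batch=16)

metrics = model.val()                          # validation metrics (mAP50, precision, recall, ...)
print(metrics)
```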
19 pages, 53371 KiB  
Article
Efficient UAV-Based Automatic Classification of Cassava Fields Using K-Means and Spectral Trend Analysis
by Apinya Boonrang, Pantip Piyatadsananon and Tanakorn Sritarapipat
AgriEngineering 2024, 6(4), 4406-4424; https://doi.org/10.3390/agriengineering6040250 - 22 Nov 2024
Viewed by 300
Abstract
High-resolution images captured by Unmanned Aerial Vehicles (UAVs) play a vital role in precision agriculture, particularly in evaluating crop health and detecting weeds. However, the detailed pixel information in these images makes classification a time-consuming and resource-intensive process. Despite these challenges, UAV imagery is increasingly utilized for various agricultural classification tasks. This study introduces an automatic classification method designed to streamline the process, specifically targeting cassava plants, weeds, and soil. The approach combines K-means unsupervised classification with spectral trend-based labeling, significantly reducing the need for manual intervention, and ensures reliable and accurate classification results by leveraging color indices derived from RGB data together with mean-shift filtering. Key findings reveal that the combination of the blue (B) channel, Visible Atmospherically Resistant Index (VARI), and color index (CI), with filtering parameters spatial radius (sp) = 5 and color radius (sr) = 10, effectively differentiates soil from vegetation, while the green (G) channel, excess red (ExR), and excess green (ExG), with filtering parameters (sp = 10, sr = 20), successfully distinguish cassava from weeds. The classification maps generated by this method achieved high kappa coefficients of 0.96, with accuracy levels comparable to supervised methods like Random Forest classification. The technique offers significant reductions in processing time compared to traditional methods and does not require training data, making it adaptable to different cassava fields captured by various UAV-mounted optical sensors. Ultimately, the proposed classification process minimizes manual intervention by incorporating efficient pre-processing steps into the classification workflow, making it a valuable tool for precision agriculture.
(This article belongs to the Special Issue Computer Vision for Agriculture and Smart Farming)
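The filtering-plus-clustering step can be illustrated with a short sketch using OpenCV's mean-shift filter and scikit-learn's K-means; the input path, channel choices, and cluster count are simplified stand-ins for the paper's full spectral-trend labeling:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

bgr = cv2.imread("cassava_plot.png")           # placeholder path for a UAV RGB tile
if bgr is None:                                # fall back to a synthetic tile so the sketch runs anywhere
    bgr = np.random.default_rng(0).integers(0, 255, (120, 160, 3), dtype=np.uint8)

# Mean-shift filtering with the paper's cassava-vs-weed radii (sp = 10, sr = 20).
smoothed = cv2.pyrMeanShiftFiltering(bgr, sp=10, sr=20)

b, g, r = [smoothed[..., i].astype(float) for i in range(3)]
total = b + g + r + 1e-6
bn, gn, rn = b / total, g / total, r / total
exg = 2 * gn - rn - bn                         # excess green
exr = 1.4 * rn - gn                            # excess red
features = np.stack([g.ravel(), exg.ravel(), exr.ravel()], axis=1)

# Unsupervised K-means into 3 clusters (cassava, weed, soil); in the paper the labels
# are then assigned by spectral-trend rules rather than manually.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
label_map = labels.reshape(bgr.shape[:2])
print("cluster sizes:", np.bincount(labels))
```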
Figures:
Figure 1. Study area of cassava fields captured by the DJI Phantom 4 Pro sensor.
Figure 2. Study area of cassava fields captured by the DJI Phantom 4 sensor.
Figure 3. Proposed classification process.
Figure 4. Boxplot of the spectral values of the classes.
Figure 5. Kappa coefficients of K-means, RF, and the proposed classification process.
Figure 6. Classification results using the proposed classification process: (a) Plot 1, an area with patchy and thin weed patches; (b) Plot 5, an area with fewer weed patches and dense weed coverage; (c) Plot 8, an area with varying light illumination.
13 pages, 9487 KiB  
Article
Cotton Yield Prediction via UAV-Based Cotton Boll Image Segmentation Using YOLO Model and Segment Anything Model (SAM)
by Janvita Reddy, Haoyu Niu, Jose L. Landivar Scott, Mahendra Bhandari, Juan A. Landivar, Craig W. Bednarz and Nick Duffield
Remote Sens. 2024, 16(23), 4346; https://doi.org/10.3390/rs16234346 - 21 Nov 2024
Viewed by 268
Abstract
Accurate cotton yield prediction is essential for optimizing agricultural practices, improving storage management, and efficiently utilizing resources like fertilizers and water, ultimately benefiting farmers economically. Traditional yield estimation methods, such as field sampling and cotton weighing, are time-consuming and labor-intensive. Emerging technologies provide a solution by offering farmers advanced forecasting tools that can significantly enhance production efficiency. In this study, the authors apply segmentation techniques to cotton crop imagery collected by unmanned aerial vehicles (UAVs) to predict yield, combining the Segment Anything Model (SAM) for semantic segmentation with You Only Look Once (YOLO) object detection to enhance prediction performance. By correlating segmentation outputs with yield data, a linear regression model predicts yield, achieving an R² value of 0.913, indicating the model's reliability. This approach offers a robust framework for cotton yield prediction, significantly improving accuracy and supporting more informed decision-making in agriculture.
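The final regression step is simple enough to show directly; the pixel counts and yields below are invented illustrative numbers, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical per-plot data: cotton-boll pixel counts from YOLO-guided SAM masks
# paired with measured yields.
pixel_counts = np.array([12000, 18500, 9800, 22100, 15400, 20050]).reshape(-1, 1)
yield_kg = np.array([310.0, 455.0, 260.0, 540.0, 395.0, 500.0])

model = LinearRegression().fit(pixel_counts, yield_kg)
pred = model.predict(pixel_counts)
print(f"slope={model.coef_[0]:.4f}, intercept={model.intercept_:.1f}, "
      f"R2={r2_score(yield_kg, pred):.3f}")
```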
Figures:
Figure 1. The cotton field on 9 November 2022.
Figure 2. The step-by-step workflow from image acquisition to the generation of the orthomosaic image.
Figure 3. A demonstration of the original and mask images.
Figure 4. Comparison of cotton boll detection between YOLO v7 and YOLO v8 models.
Figure 5. The proposed YOLO + SAM model architecture.
Figure 6. Comparison of cotton boll segmentation between different models.
Figure 7. Linear relationship between pixel count and actual yield across different models. YOLOv8 + SAM has the best yield prediction performance, with an R² of 0.913.
26 pages, 1748 KiB  
Article
Sparse Online Gaussian Process Adaptive Control of Unmanned Aerial Vehicle with Slung Payload
by Muhammed Rasit Kartal, Dmitry I. Ignatyev and Argyrios Zolotas
Drones 2024, 8(11), 687; https://doi.org/10.3390/drones8110687 - 19 Nov 2024
Viewed by 363
Abstract
In the past decade, Unmanned Aerial Vehicles (UAVs) have garnered significant attention across diverse applications, including surveillance, cargo shipping, and agricultural spraying. Despite their widespread deployment, concerns about maintaining stability and safety, particularly when carrying payloads, persist. The development of such UAV platforms necessitates robust control mechanisms to ensure stable and precise maneuvering. Numerous UAV operations require the integration of payloads, which introduces substantial stability challenges; unstable payloads such as liquid or slung payloads are particularly demanding in this regard, falling into the category of mismatched uncertain systems. This study focuses on establishing stability for slung-payload-carrying systems. Our approach combines several algorithms: the incremental backstepping control algorithm (IBKS), integrator backstepping (IBS), Proportional–Integral–Derivative (PID) control, and the Sparse Online Gaussian Process (SOGP), a machine learning technique that identifies and mitigates disturbances. Linear and nonlinear methodologies are compared across different scenarios to identify an effective solution. The machine learning component, implemented with SOGP, effectively detects and counteracts disturbances. Insights are discussed in the context of rejecting liquid sloshing disturbance.
(This article belongs to the Special Issue Advances of UAV in Precision Agriculture)
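As a rough illustration of online disturbance learning under a sparsity budget, here is a crude stand-in (not the paper's SOGP formulation): an exact GP refit on a fixed-size window of recent observations, used to estimate a sloshing-like disturbance that a controller could subtract from its command:

```python
import numpy as np

class BudgetedOnlineGP:
    """Crude stand-in for a sparse online GP (not the paper's SOGP): keep at most
    `budget` basis points, dropping the oldest, and do exact GP regression on them."""
    def __init__(self, budget=30, length=0.5, sig_f=1.0, sig_n=0.05):
        self.budget, self.length, self.sig_f, self.sig_n = budget, length, sig_f, sig_n
        self.X, self.y = [], []

    def _k(self, A, B):
        d = np.subtract.outer(np.asarray(A, float), np.asarray(B, float))
        return self.sig_f ** 2 * np.exp(-0.5 * (d / self.length) ** 2)

    def update(self, x, y):
        self.X.append(x)
        self.y.append(y)
        if len(self.X) > self.budget:   # enforce the sparsity budget
            self.X.pop(0)
            self.y.pop(0)

    def predict(self, x):
        if not self.X:
            return 0.0
        K = self._k(self.X, self.X) + self.sig_n ** 2 * np.eye(len(self.X))
        return float(self._k([x], self.X)[0] @ np.linalg.solve(K, np.asarray(self.y)))

# Toy loop: learn a hypothetical slung-payload disturbance d(t) from noisy observations.
gp = BudgetedOnlineGP()
rng = np.random.default_rng(1)
for t in np.arange(0.0, 5.0, 0.05):
    d_true = 0.4 * np.sin(2.5 * t)
    gp.update(t, d_true + 0.05 * rng.standard_normal())
print(f"d(5.0) true={0.4 * np.sin(2.5 * 5.0):+.3f}  estimated={gp.predict(5.0):+.3f}")
```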
Figures:
Figure 1. Pendulum and UAV frames.
Figure 2. Proposed cascade control system diagram.
Figure 3. Proportional–Integral–Derivative (PID) controller diagram.
Figure 4. Integrator backstepping diagram.
Figure 5. Incremental backstepping methodology diagram.
Figure 6. Anti-windup command filter diagram.
Figure 7. Proposed cascade control system diagram.
Figure 8. Step signal command for position and controller performance comparison.
Figure 9. Step signal command for position and controller performance comparison with payload.
Figure 10. UAV spraying drone visual.
Figure 11. Spraying operation from the top.
Figure 12. Spraying operation from the corner angle.
Figure 13. X, Y, Z position error values in simulation.
Figure 14. X–Y–Z position, alpha angle, beta angle, and weight change relation.
17 pages, 5786 KiB  
Article
Corn Plant In-Row Distance Analysis Based on Unmanned Aerial Vehicle Imagery and Row-Unit Dynamics
by Marko M. Kostić, Željana Grbović, Rana Waqar, Bojana Ivošević, Marko Panić, Antonio Scarfone and Aristotelis C. Tagarakis
Appl. Sci. 2024, 14(22), 10693; https://doi.org/10.3390/app142210693 - 19 Nov 2024
Viewed by 545
Abstract
Uniform spatial distribution of plants is crucial in arable crops. Seeding quality is affected by numerous parameters, including the working speed and vibrations of the seeder. Therefore, investigating effective and rapid methods to evaluate seeding quality and the parameters affecting seeder performance is of high importance. With the latest advancements in unmanned aerial vehicle (UAV) technology, the potential for acquiring accurate agricultural data has significantly increased, making UAVs an ideal tool for scouting applications in agricultural systems. This study investigates the effectiveness of different plant recognition algorithms applied to UAV-derived images for evaluating seeder performance based on detected plant spacings, and examines the impact of seeding unit vibrations on seeding quality by analyzing data from accelerometers installed on the seeder. For the image analysis, three plant recognition approaches were tested: an unsupervised segmentation method based on the Visible Atmospherically Resistant Index (VARI), template matching (TM), and a deep learning model, Mask R-CNN. The Mask R-CNN model demonstrated the highest recognition reliability at 96.7%, excelling in detecting seeding errors such as misses and doubles, as well as in evaluating the quality of feed index and precision against ground-truth data. Although the VARI-based unsupervised method and TM outperformed Mask R-CNN in recognizing double spacings, the Mask R-CNN was the most promising overall. Vibration analysis indicated that the seeder's working speed significantly affected seeding quality. These findings suggest areas for potential improvements in machine technology for sowing operations.
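Given plant centroids along a row (e.g., from the Mask R-CNN detections), the seeding-quality indices can be computed as in the sketch below; the 1.5×/0.5× thresholds follow the common ISO 7256/1-style convention, which is an assumption here rather than taken from the paper, and the positions are invented:

```python
import numpy as np

def seeding_quality(positions_cm, nominal_cm=20.0):
    """Spacing-quality indices from in-row plant positions.
    Assumed ISO 7256/1-style convention: a miss if spacing > 1.5x nominal,
    a multiple (double) if spacing < 0.5x nominal."""
    spacings = np.diff(np.sort(np.asarray(positions_cm, dtype=float)))
    miss = float(np.mean(spacings > 1.5 * nominal_cm))
    multiple = float(np.mean(spacings < 0.5 * nominal_cm))
    return {
        "miss_index": miss,
        "multiple_index": multiple,
        "quality_of_feed": 1.0 - miss - multiple,
        "spacing_std_cm": float(np.std(spacings)),
    }

# Hypothetical centroid positions (cm) along one row: one double (62/63) and one miss (84 -> 125).
row = [0, 21, 40, 62, 63, 84, 125, 146]
print(seeding_quality(row))
```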
Figures:
Figure 1. Illustration of the in-field activities and data processing procedures.
Figure 2. The pipeline of the UAV orthomosaic creation followed by its patch-based division procedure.
Figure 3. The proposed procedure for the identification of rows and plants within them.
Figure 4. The efficiency of corn plant detection techniques demonstrated on a portion of the patch.
Figure 5. Distribution of plant spacing determined by the plant detection method; the distribution of GT spacings illustrates the disparity in plant detection accuracy among the chosen methods.
Figure 6. Comparative effectiveness of various plant detection techniques using GT data as reference values: (a) absolute values; (b) the proportion of mis-spaced plants in the overall population.
Figure 7. Plant spacing uniformity indicators with relative metrics.
Figure 8. The expected number of plants compared to the number of detected plants for each row along a selected route; the blue line indicates plant numbers estimated by the Mask R-CNN model, the red line the theoretical plant numbers.
Figure 9. The expected number of plants vs. the number of detected plants per row level.
Figure 10. Spectral densities of the seeding row unit vibrations excited by the working speed.
Figure 11. The relationship between vibration parameters of the seeding row unit and seeding quality parameters for ground-truth data (orange dots) and Mask R-CNN data (blue dots): RMS compared to miss index (a), multiple index (c), quality of feed index (e), and standard deviation (g); peak-to-peak compared to miss index (b), multiple index (d), quality of feed index (f), and standard deviation (h).
27 pages, 2352 KiB  
Article
LEVIOSA: Natural Language-Based Uncrewed Aerial Vehicle Trajectory Generation
by Godwyll Aikins, Mawaba Pascal Dao, Koboyo Josias Moukpe, Thomas C. Eskridge and Kim-Doang Nguyen
Electronics 2024, 13(22), 4508; https://doi.org/10.3390/electronics13224508 - 17 Nov 2024
Viewed by 435
Abstract
This paper presents LEVIOSA, a novel framework for text- and speech-based uncrewed aerial vehicle (UAV) trajectory generation. By leveraging multimodal large language models (LLMs) to interpret natural language commands, the system converts text and audio inputs into executable flight paths for UAV swarms. The approach aims to simplify the complex task of multi-UAV trajectory generation, which has significant applications in fields such as search and rescue, agriculture, infrastructure inspection, and entertainment. The framework involves two key innovations: a multi-critic consensus mechanism to evaluate trajectory quality and a hierarchical prompt structuring for improved task execution; together they ensure fidelity to user goals. The framework integrates several multimodal LLMs for high-level planning, converting natural language inputs into 3D waypoints that guide UAV movements, and per-UAV low-level controllers that execute each UAV's assigned waypoint path based on the high-level plan. The methodology was tested on various trajectory types with promising accuracy, synchronization, and collision avoidance results. The findings pave the way for more intuitive human–robot interactions and advanced multi-UAV coordination.
(This article belongs to the Collection Predictive and Learning Control in Engineering Applications)
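The multi-critic consensus idea can be sketched with stand-in heuristic critics in place of LLM agents; the candidate generator, the two critics, and the aggregation rule below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def critic_smoothness(path):
    # Penalize sharp turns: mean norm of second differences along the path.
    return -float(np.linalg.norm(np.diff(path, n=2, axis=0), axis=1).mean())

def critic_goal(path, goal):
    # Penalize distance between the final waypoint and the commanded goal.
    return -float(np.linalg.norm(path[-1] - goal))

def consensus_score(path, goal):
    # Aggregator stand-in: average the critics' scores.
    return float(np.mean([critic_smoothness(path), critic_goal(path, goal)]))

goal = np.array([10.0, 10.0, 5.0])
# "Generator" stage stand-in: noisy straight-line waypoint candidates toward the goal.
candidates = [
    np.linspace([0.0, 0.0, 0.0], goal, 20) + rng.normal(0, s, (20, 3))
    for s in (0.1, 0.5, 1.0)
]
best = max(candidates, key=lambda p: consensus_score(p, goal))
print("best candidate score:", consensus_score(best, goal))
```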
Figures:
Figure 1. Our framework incorporates several LLMs to generate and refine drone waypoints based on user commands.
Figure 2. Components of the high-level planner system, showing the role of each LLM agent type, their inputs, and outputs. (a) Instructor agent. (b) Generator agent. (c) Critic agents. (d) Aggregator agent.
Figure 3. The overall trajectory is divided into individual waypoints for each drone. The waypoints, combined with each drone's real-time observations, are then processed by the dedicated low-level policy for that UAV, generating the specific actions required to guide the drone's movement.
Figure 4. Sample Star generated based on Gemini.
Figure 5. Sample Star generated based on GeminiFlash.
Figure 6. Sample Star generated based on GPT-4o.
Figure 7. Successful 5-petal flower trajectory generated by the Gemini model.
Figure 8. Common failure mode of the Gemini model for petal flower geometries.
Figure 9. A thousand drones successfully form parallel lines generated by Gemini.
Figure 10. One hundred drones successfully form a spiral generated by Gemini.
Figure 11. A thousand drones unsuccessfully form a dragon generated by Gemini.
32 pages, 3323 KiB  
Systematic Review
Artificial Intelligence Applied to Support Agronomic Decisions for the Automatic Aerial Analysis Images Captured by UAV: A Systematic Review
by Josef Augusto Oberdan Souza Silva, Vilson Soares de Siqueira, Marcio Mesquita, Luís Sérgio Rodrigues Vale, Jhon Lennon Bezerra da Silva, Marcos Vinícius da Silva, João Paulo Barcelos Lemos, Lorena Nunes Lacerda, Rhuanito Soranz Ferrarezi and Henrique Fonseca Elias de Oliveira
Agronomy 2024, 14(11), 2697; https://doi.org/10.3390/agronomy14112697 - 15 Nov 2024
Viewed by 526
Abstract
Integrating advanced technologies such as artificial intelligence (AI) with traditional agricultural practices has changed how activities are developed in agriculture, with the aim of automating manual processes and improving the efficiency and quality of farming decisions. With the advent of deep learning models such as the convolutional neural network (CNN) and You Only Look Once (YOLO), many studies have emerged given the need to develop solutions to problems and take advantage of all the potential this technology offers. This systematic literature review presents an in-depth investigation of the application of AI in supporting the management of weeds, plant nutrition, water, pests, and diseases, and was conducted using the PRISMA methodology and guidelines. Data from the different papers indicated that the main research interests comprise five groups: (a) type of agronomic problem; (b) type of sensor; (c) dataset treatment; (d) evaluation metrics and quantification; and (e) AI technique. The inclusion (I) and exclusion (E) criteria adopted in this study were: (I1) articles that applied AI techniques for agricultural analysis; (I2) complete articles written in English; (I3) articles from specialized scientific journals; (E1) articles that did not describe the type of agrarian analysis used; (E2) articles that did not specify the AI technique used or that were incomplete or abstract-only; (E3) articles that did not present substantial experimental results. The articles were searched on the official pages of the main scientific bases: ACM, IEEE, ScienceDirect, MDPI, and Web of Science. The papers were categorized and grouped to show the main contributions of the literature to supporting agricultural decisions using AI. This study found that AI methods perform best in supporting weed detection, classification of plant diseases, and estimation of agricultural yield when using images captured by Unmanned Aerial Vehicles (UAVs). Furthermore, CNN and YOLO, as well as their variations, present the best results across all groups. This review also points out the limitations and potential challenges of working with deep machine learning models, aiming to systematize knowledge and to benefit researchers and professionals regarding AI applications in mitigating agronomic problems.
(This article belongs to the Section Precision and Digital Agriculture)
Figures:
Figure 1. Flowchart of the systematic review selection steps according to the PRISMA methodology, per the PRISMA 2020 statement from Page et al. [25].
Figure 2. Flowchart of the systematic literature review data extraction and sequence highlights, adapted from Siqueira et al. [23]. Data extraction steps: (a) articles divided by the type of agronomic problem each proposed to solve; (b) articles, by type of agronomic problem, that used sensors to acquire the dataset; (c) articles, by type of agronomic problem, that used image improvement techniques on the dataset; (d) number of articles that used evaluation metrics; (e) the main machine learning models used by each article in this study.
Figure 3. Example of data output after training the YOLOv7 model for weed segmentation in commercial crops.
Figure 4. Number of articles and timeline of publications per type of agronomic problem.
Figure 5. Number of articles published and scientific platforms per type of agronomic problem.
Figure 6. Number of articles per country included in this SLR.
16 pages, 5643 KiB  
Article
Revolutionizing Palm Dates Harvesting with Multirotor Flying Vehicles
by Hanafy M. Omar and Saad M. S. Mukras
Appl. Sci. 2024, 14(22), 10529; https://doi.org/10.3390/app142210529 - 15 Nov 2024
Viewed by 297
Abstract
This study addresses the challenges of traditional date palm harvesting, which is often labor-intensive and hazardous, by introducing an innovative solution utilizing multirotor flying vehicles (MRFVs). Unlike conventional methods such as hydraulic lifts and ground-based robotic manipulators, the proposed system integrates a quadrotor equipped with a winch and a suspended robotic arm with a precision saw. Controlled remotely via a mobile application, the quadrotor navigates to targeted branches on the date palm tree, where the robotic arm, guided by live video feedback from integrated cameras, accurately severs the branches. Extensive testing in a controlled environment demonstrates the system's potential to significantly improve harvesting efficiency, safety, and cost-effectiveness. This approach offers a promising alternative to traditional harvesting methods, providing a scalable solution for date palm cultivation, particularly in regions with large-scale plantations. This work marks a significant advancement in agricultural automation, offering a safer, more efficient method for harvesting date palms and contributing to the growing body of knowledge in automated farming technologies.
Figures:
Figure 1. Harvesting date palms by climbing trees.
Figure 2. Hydraulic lift for palm tree harvesting.
Figure 3. Robotic arm for date harvesting [16].
Figure 4. Developed system.
Figure 5. Robotic arm.
Figure 6. The designed winch.
Figure 7. Top view of the designed quadrotor flying vehicle.
Figure 8. Connections of the RPI fixed on the robotic arm.
Figure 9. Connections of the RPI fixed on the quadrotor.
Figure 10. Screenshot of the application main screen during operation.
Figure 11. Testing the system in the lab using the testbed.
Figure 12. Quadrotor attitude angles.
Figure 13. Quadrotor speed in the longitudinal direction.
Figure 14. Quadrotor speed in the lateral direction.
Figure 15. Quadrotor speed in the vertical direction.
25 pages, 10324 KiB  
Article
Research for the Positioning Optimization for Portable Field Terrain Mapping Equipment Based on the Adaptive Unscented Kalman Filter Algorithm
by Jiaxing Xie, Zhenbang Yu, Gaotian Liang, Xianbing Fu, Peng Gao, Huili Yin, Daozong Sun, Weixing Wang, Yueju Xue, Jiyuan Shen and Jun Li
Remote Sens. 2024, 16(22), 4248; https://doi.org/10.3390/rs16224248 - 14 Nov 2024
Viewed by 296
Abstract
Field positioning (FP) is a key technique in the digitalization of agriculture. By integrating sensors and mapping techniques, FP can convey critical information such as soil quality, plant distribution, and topography. Utilizing vehicles for field applications provides precise control and scientific management for agricultural production. Whereas conventional methods often struggle with the complexities of field conditions and suffer from insufficient accuracy, this study employs self-developed multi-sensor array hardware as a portable field topographic surveying device; this setup navigates challenging field conditions to collect raw data. Data fusion is carried out using the Unscented Kalman Filter (UKF) algorithm. Building on this, the study combines the good point set and Opposition-based Differential Evolution to jointly improve the Slime Mould Algorithm, which is linked with the UKF through loss-value feedback to realize adaptive adjustment of the UKF's parameters. This reduces the workload of parameter setting and enhances the precision of data fusion. The improved algorithm optimizes parameters with an efficiency increase of 40.43%. Using a professional mapping-grade total station for accuracy comparison, the final test results show an absolute error of less than 0.3857 m, achieving decimeter-level precision in field positioning. This provides a new application technology for better implementation of agricultural digitalization.
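A minimal UKF data-fusion sketch using the filterpy library (assumed available) is shown below: a constant-velocity model fuses noisy 2D position fixes, with the noise covariances hand-set rather than adaptively tuned by the paper's improved Slime Mould Algorithm:

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.1

def fx(x, dt):
    # Constant-velocity motion model; state is [px, py, vx, vy].
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    return F @ x

def hx(x):
    # Measurement: noisy position only.
    return x[:2]

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.zeros(4)
ukf.R = np.diag([0.3 ** 2, 0.3 ** 2])   # hand-set measurement noise (tuned adaptively in the paper)
ukf.Q = np.eye(4) * 1e-3                # hand-set process noise

rng = np.random.default_rng(2)
for k in range(50):
    true_pos = np.array([0.5 * k * dt, 0.2 * k * dt])   # synthetic straight-line track
    z = true_pos + rng.normal(0, 0.3, 2)
    ukf.predict()
    ukf.update(z)
print("final estimate:", ukf.x[:2], "truth:", true_pos)
```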
Figures:
Figure 1. Technical route.
Figure 2. Aerial view of the experimental orchard.
Figure 3. The portable multi-sensor array sampling hardware.
Figure 4. Data collection procedure.
Figure 5. Calculation procedure of the AUKF algorithm.
Figure 6. The ISMA process flowchart.
Figure 7. Classical function test results.
Figure 8. Simulation of initial population generation.
Figure 9. Opposition-based differential evolution reverse process flowchart.
Figure 10. Loss value assessment comparison after process noise adaptive optimization.
Figure 11. Loss value assessment comparison after global parameter adaptive optimization.
Figure 12. Initial performance test of the AUKF algorithm.
Figure 13. Time consumption of global parameter adaptive optimization.
Figure 14. Large-scale sampling interpolation modeling results.
Figure 15. Fixed-point precision test.
23 pages, 12566 KiB  
Article
Multispectral Images for Drought Stress Evaluation of Arabica Coffee Genotypes Under Different Irrigation Regimes
by Patrícia Carvalho da Silva, Walter Quadros Ribeiro Junior, Maria Lucrecia Gerosa Ramos, Maurício Ferreira Lopes, Charles Cardoso Santana, Raphael Augusto das Chagas Noqueli Casari, Lemerson de Oliveira Brasileiro, Adriano Delly Veiga, Omar Cruz Rocha, Juaci Vitória Malaquias, Nara Oliveira Silva Souza and Henrique Llacer Roig
Sensors 2024, 24(22), 7271; https://doi.org/10.3390/s24227271 - 14 Nov 2024
Viewed by 431
Abstract
The advancement of digital agriculture combined with computational tools and Unmanned Aerial Vehicles (UAVs) has opened the way to large-scale data collection for the calculation of vegetation indices (VIs). These VIs are useful for agricultural monitoring, as they highlight the inherent characteristics of vegetation and optimize the spatial and temporal evaluation of different crops. The experiment tested three coffee genotypes (Catuaí 62, E237 and Iapar 59) under five water regimes: (1) FI 100 (year-round irrigation with 100% replacement of evapotranspiration), (2) FI 50 (year-round irrigation with 50% evapotranspiration replacement), (3) WD 100 (no irrigation from June to September (dry season) and, thereafter, 100% evapotranspiration replacement), (4) WD 50 (no irrigation from June to September (water stress) and, thereafter, 50% evapotranspiration replacement) and (5) rainfed (no irrigation during the year). The irrigated treatments received water from both irrigation and precipitation. Most indices were highest under full irrigation (FI 100). NDVI values ranged from 0.87 to 0.58 and SAVI from 0.65 to 0.38, with the lowest values for genotype E237 in the rainfed areas. The NDVI, OSAVI, MCARI, NDRE and GDVI indices correlated very strongly and positively with photosynthesis (A) and strongly with transpiration (E) of the coffee trees. On the other hand, temperature-based indices, such as canopy temperature and the TCARI index, correlated negatively with A, E and stomatal conductance (gs). Under full irrigation, the tested genotypes did not differ between the years of evaluation. Overall, the index values of Iapar 59 exceeded those of the other genotypes. The use of VIs to evaluate coffee tree performance under different water managements proved efficient in discriminating the best genotypes and optimal water conditions for each genotype. Given the economic importance of coffee as a crop and its susceptibility to extreme events such as drought, this study provides insights that facilitate the optimization of productivity and resilience of plantations under variable climatic conditions.
(This article belongs to the Section Environmental Sensing)
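Two of the indices used have simple closed forms, shown below with invented reflectance values: NDVI = (NIR - Red)/(NIR + Red), and SAVI adds the soil-adjustment factor L (commonly 0.5):

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red); small epsilon avoids division by zero
    return (nir - red) / (nir + red + 1e-9)

def savi(nir, red, L=0.5):
    # SAVI = (1 + L) * (NIR - Red) / (NIR + Red + L); L = 0.5 is the usual soil factor
    return (1 + L) * (nir - red) / (nir + red + L)

# Hypothetical reflectances for two pixels (well-watered vs. drought-stressed canopy).
nir = np.array([0.45, 0.30])
red = np.array([0.04, 0.09])
print("NDVI:", ndvi(nir, red))   # higher under full irrigation, matching the study's trend
print("SAVI:", savi(nir, red))
```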
Figures:
Figure 1. Location and arrangement of treatments and genotypes in the study area at Embrapa Cerrados in Brasília, DF, in the Cerrado biome.
Figure 2. Long-term maximum and minimum average climate data over the past 46 years (1974–2020) in the area.
Figure 3. Vegetation indices (A) NDVI, (B) SAVI, (C) NDRE, (D) TCARI, (E) MCARI, (F) GNDVI, (G) MTCI and (H) GDVI in 2019 for three coffee genotypes (Catuaí 62, E237 and Iapar 59) under five water regimes (FI 100%, FI 50%, WS 100%, WS 50% and rainfed) during water stress in drought treatments. Means followed by the same capital letters compare water regimes for each coffee genotype; lowercase letters compare coffee genotypes within each water regime.
Figure 4. The same vegetation indices and conventions as Figure 3, for 2020.
Figure 5. Vegetation indices NDVI, OSAVI, MCARI, TCARI, NDRE, GNDVI, GDVI and MTCI for three coffee genotypes (Catuaí 62, E237 and Iapar 59) under five water regimes (from left to right: rainfed; WD (water stress) 1, 50%; WD2, 50%; FI (full irrigation)) in 2019.
Figure 6. The same indices and layout as Figure 5, for 2020.
Figure 7. Correlogram of Pearson's correlation between vegetation indices and leaf gas exchange in 2019. A: photosynthesis; gs: stomatal conductance; E: transpiration; T: canopy temperature (°C). Values close to 1 indicate a strong positive correlation, values close to −1 a strong negative correlation, and values close to 0 no significant linear relationship. Font size encodes the magnitude of the correlation: stronger correlations (positive or negative) appear larger and bold, weaker ones smaller and unbolded. Significance: * p &lt; 0.05, ** p &lt; 0.01, *** p &lt; 0.001.
Figure 8. The same correlogram as Figure 7, for 2020.
Figure 9. Exploratory principal component analysis of vegetation indices and coffee tree physiology and productivity in response to the different water regimes and genotypes in the 2019 (A,B) and 2020 (C,D) growing seasons. Vector length reflects the strength of each variable's contribution to the principal components, and vector orientation highlights multivariate differences between conditions. Ellipses represent intragroup dispersion based on the data covariance, centered on each group's centroid: more compact ellipses suggest greater homogeneity, larger ellipses greater heterogeneity associated with the imposed treatments. The distribution of the vectors and the separation of the ellipses show that the analyzed variables differentiate the cultivar and irrigation-regime groups.
23 pages, 2756 KiB  
Review
A Review of Drone Technology and Operation Processes in Agricultural Crop Spraying
by Argelia García-Munguía, Paloma Lucía Guerra-Ávila, Efraín Islas-Ojeda, Jorge Luis Flores-Sánchez, Otilio Vázquez-Martínez, Alberto Margarito García-Munguía and Otilio García-Munguía
Drones 2024, 8(11), 674; https://doi.org/10.3390/drones8110674 - 14 Nov 2024
Viewed by 1004
Abstract
Precision agriculture is revolutionizing the management and production of agricultural crops. The development of new technologies in agriculture, such as unmanned aerial vehicles (UAVs), has proven to be an efficient option for spraying various compounds on crops, and UAVs significantly contribute to enhancing precision agriculture. This review aims to determine whether integrating advanced precision technologies into drones for crop spraying enhances spraying accuracy compared to drones utilizing standard spraying technologies. To achieve this, 100 articles published between 2019 and 2024 were selected and analyzed. The information is summarized into five main areas: (1) improved spraying with agricultural drone technologies; (2) operational parameters; (3) spraying applications of chemical and natural compounds with agricultural drones; (4) evaluations of pest control efficacy; and (5) notable limitations. Finally, considerations are presented on the advantages of drone technology with artificial intelligence (AI); the practical effects of reducing pesticides, which in some cases have reached a 30% reduction relative to the recommended dose; and future directions for improving precision agriculture. The use of drones in precision agriculture presents technical and scientific challenges for maximizing spraying efficiency and minimizing agrochemical use.
(This article belongs to the Special Issue Recent Advances in Crop Protection Using UAV and UGV)
Figures:
Figure 1. Methodology flow diagram.
Figure 2. Steps involved in drone spraying operation.
Figure 3. Spray coverage.
25 pages, 43161 KiB  
Article
Mamba-UAV-SegNet: A Multi-Scale Adaptive Feature Fusion Network for Real-Time Semantic Segmentation of UAV Aerial Imagery
by Longyang Huang, Jintao Tan and Zhonghui Chen
Drones 2024, 8(11), 671; https://doi.org/10.3390/drones8110671 - 13 Nov 2024
Viewed by 563
Abstract
Accurate semantic segmentation of high-resolution images captured by unmanned aerial vehicles (UAVs) is crucial for applications in environmental monitoring, urban planning, and precision agriculture. However, challenges such as class imbalance, small-object detection, and intricate boundary details complicate the analysis of UAV imagery. To address these issues, we propose Mamba-UAV-SegNet, a novel real-time semantic segmentation network specifically designed for UAV images. The network integrates a Multi-Head Mamba Block (MH-Mamba Block) for enhanced multi-scale feature representation, an Adaptive Boundary Enhancement Fusion Module (ABEFM) for improved boundary-aware feature fusion, and an edge-detail auxiliary training branch to capture fine-grained details. The practical utility of the method is demonstrated through its application to farmland segmentation. Extensive experiments on the UAV-City, VDD, and UAVid datasets show that the model outperforms state-of-the-art methods, achieving mean Intersection over Union (mIoU) scores of 71.2%, 77.5%, and 69.3%, respectively. Ablation studies confirm the effectiveness of each component and their combined contribution to overall performance. The proposed method balances segmentation accuracy and computational efficiency, maintaining real-time inference speeds suitable for practical UAV applications.
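The headline metric, mean Intersection over Union, is easy to state in code; the label maps below are toy values for illustration:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union from integer label maps (illustrative helper)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:                       # ignore classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 label maps with 3 classes (e.g., building, road, vegetation).
pred = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 1, 1], [2, 2, 2, 2]])
target = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [2, 2, 1, 1], [2, 2, 2, 1]])
print(f"mIoU = {mean_iou(pred, target, 3):.3f}")
```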
Figures:
Figure 1. Overall architecture of Mamba-UAV-SegNet. The network consists of four stages within the STDC backbone, corresponding to different levels of feature extraction. The MH-Mamba Block is applied to feature maps from Stage 2 and Stage 4. The Adaptive Boundary Enhancement Fusion Module (ABEFM) fuses features processed by the MH-Mamba Block. The gray block represents the edge-detail auxiliary training branch.
Figure 2. Architecture of the MH-Mamba Block. The module integrates multi-scale convolutions and a Multi-Head 2D State Space Model (Multi-Head 2D-SSM). Multi-scale convolutions capture local features, while the Multi-Head 2D-SSM applies four directional scans to model long-range dependencies; the outputs are concatenated and compressed, with adaptive feature fusion selectively enhancing critical features.
Figure 3. Adaptive Boundary Enhancement Fusion Module architecture diagram.
Figure 4. Visualization results on the UAVid dataset.
Figure 5. Visualization results on the VDD dataset.
Figure 6. Visualization of low-level feature maps.
Figure 7. Grad-CAM visualization for the lake class.
Figure 8. Feature fusion and visualization of feature maps.
Figure 9. Application of Mamba-UAV-SegNet to agricultural land segmentation. (Top) UAV equipped for aerial imaging. (Bottom left) Raw aerial image of farmland. (Bottom right) Semantic segmentation output; the map distinguishes various field features, demonstrating the model's effectiveness in mapping agricultural landscapes.
21 pages, 7841 KiB  
Article
Research on a Method for Measuring the Pile Height of Materials in Agricultural Product Transport Vehicles Based on Binocular Vision
by Wang Qian, Pengyong Wang, Hongjie Wang, Shuqin Wu, Yang Hao, Xiaoou Zhang, Xinyu Wang, Wenyan Sun, Haijie Guo and Xin Guo
Sensors 2024, 24(22), 7204; https://doi.org/10.3390/s24227204 - 11 Nov 2024
Viewed by 435
Abstract
The advancement of unloading technology in combine harvesting is crucial for the intelligent development of agricultural machinery. Accurately measuring material pile height in transport vehicles is essential, as uneven accumulation can lead to spillage and voids, reducing loading efficiency, and relying solely on manual observation of stack height can decrease harvesting efficiency and pose safety risks due to driver distraction. This research applies binocular vision to agricultural harvesting, proposing a novel method that uses a stereo matching algorithm to measure material pile height during harvesting. By comparing distance measurements taken in the empty and loaded states, the method determines stack height, and a linear regression model processes the stack height data to enhance measurement accuracy. A binocular vision system was established, applying Zhang's calibration method on the MATLAB (R2019a) platform to correct camera parameters, achieving a calibration error of 0.15 pixels. The study implemented the block matching (BM) and semi-global block matching (SGBM) algorithms with the OpenCV (4.8.1) library on the PyCharm (2020.3.5) platform for stereo matching, generating disparity and pseudo-color maps. Three-dimensional coordinates of key points on the piled material were calculated to measure the distances from the vehicle container bottom and the material surface to the binocular camera, allowing the material pile height to be calculated; a linear regression model was then applied to correct the data. The results indicate that binocular stereo vision with stereo matching followed by linear regression can accurately calculate material pile height: the average relative error was 3.70% for the BM algorithm and 3.35% for SGBM, both within the acceptable precision range. While the SGBM algorithm was, on average, 46 ms slower than BM, both maintained errors under 7% and computation times under 100 ms, meeting the real-time measurement requirements of combine harvesting. In practical operations, this method can effectively measure material pile height in transport vehicles; the choice of matching algorithm should consider container size, material properties, and the balance between measurement time, accuracy, and disparity map completeness. This approach aids manual adjustment of machinery posture and provides data support for future autonomous master-slave collaborative operations in combine harvesting.
(This article belongs to the Special Issue AI, IoT and Smart Sensors for Precision Agriculture)
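The disparity-to-height chain can be sketched with OpenCV's SGBM matcher on a synthetic rectified pair; the focal length, baseline, and empty/loaded distances below are assumed values, not the study's calibration:

```python
import cv2
import numpy as np

# Synthetic rectified pair (stand-in for real camera frames): a textured left image
# and a right image shifted by a known disparity of 32 px.
rng = np.random.default_rng(3)
left = (rng.random((240, 320)) * 255).astype(np.uint8)
left = cv2.GaussianBlur(left, (5, 5), 0)
right = np.roll(left, -32, axis=1)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,       # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,            # smoothness penalties, OpenCV's suggested scaling
    P2=32 * 5 * 5,
)
disp = sgbm.compute(left, right).astype(np.float32) / 16.0   # fixed-point -> pixels

# Depth by triangulation, Z = f * B / d; focal length (px) and baseline (m) are assumed.
f_px, baseline_m = 700.0, 0.12
center = disp[120, 160]
print(f"disparity ~ {center:.1f} px, depth ~ {f_px * baseline_m / center:.2f} m")

# Pile height = (camera-to-empty-container-bottom distance) minus
# (camera-to-material-surface distance) at the same image point.
z_empty, z_loaded = 2.50, 1.85   # hypothetical distances (m)
print(f"pile height ~ {z_empty - z_loaded:.2f} m")
```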
Figures:
Figure 1. Principle of triangulation.
Figure 2. Zhang's calibration steps.
Figure 3. Corner extraction results for the checkerboard. (a) Calibration paper; (b) calibration plate.
Figure 4. Relative position between the binocular camera and the calibration board. (a) Calibration paper; (b) calibration plate.
Figure 5. Reprojection errors of the chessboard calibration. (a) Calibration paper; (b) calibration plate.
Figure 6. Epipolar correction. (a) Before correction; (b) after correction.
Figure 7. Basic workflow of stereo matching.
Figure 8. Method for measuring the height of piled materials.
Figure 9. The process of measuring the piled height of potatoes.
Figure 10. Images under no-load conditions. (a) Left image; (b) right image; (c) BM disparity map; (d) BM pseudo-color map; (e) SGBM disparity map; (f) SGBM pseudo-color map.
Figure 11. Images of three different load conditions.
Figure 12. Distance measurement results between the surface of stacked potatoes and the stereo camera under three conditions. (a) State 1; (b) state 2; (c) state 3.
Figure 13. Regression models and evaluation metrics. (a) BM measured and calibrated values; (b) SGBM measured and calibrated values; (c) residual plot of the BM regression model; (d) residual plot of the SGBM regression model.
Figure 14. Comparison of pile heights and errors before and after calibration.
25 pages, 4811 KiB  
Review
Transforming Farming: A Review of AI-Powered UAV Technologies in Precision Agriculture
by Juhi Agrawal and Muhammad Yeasir Arafat
Drones 2024, 8(11), 664; https://doi.org/10.3390/drones8110664 - 10 Nov 2024
Viewed by 839
Abstract
The integration of unmanned aerial vehicles (UAVs) with artificial intelligence (AI) and machine learning (ML) has fundamentally transformed precision agriculture by enhancing efficiency, sustainability, and data-driven decision making. This paper presents a comprehensive overview of how multispectral, hyperspectral, and thermal sensors mounted on drones integrate with AI-driven algorithms to transform modern farms. Such technologies support real-time crop health monitoring, resource management, and automated decision making, improving productivity with considerably reduced resource consumption. Limitations include high operating costs, limited UAV battery life, and the need for highly trained operators. The novelty of this study lies in its thorough analysis and comparison of UAV-AI integration research, along with an overview of existing related works and an analysis of the gaps. Practical solutions to technological challenges are summarized to provide insights for precision agriculture, and the barriers to UAV adoption are discussed together with practical ways to overcome existing limitations. Finally, the paper outlines future research directions, including advances in sensor technology, energy-efficient AI models, and the ethical considerations surrounding the use of UAVs in agricultural research.
Figures:
Figure 1. Using sensors, UAVs, and AI technology for precision farming.
Figure 2. Outline of the paper.
Figure 3. A variety of UAVs used in agriculture.
Figure 4. Applications of UAVs in precision agriculture.
Figure 5. Workflow for integration of UAVs, sensors, and AI in precision farming.