Review

Unmanned Ground Vehicles for Continuous Crop Monitoring in Agriculture: Assessing the Readiness of Current ICT Technology

by Maurizio Agelli, Nicola Corona *, Fabio Maggio and Paolo Vincenzo Moi *
CRS4, Center for Advanced Studies, Research and Development in Sardinia, Loc. Piscina Manna ed. 1, 09050 Pula, Italy
* Authors to whom correspondence should be addressed.
Machines 2024, 12(11), 750; https://doi.org/10.3390/machines12110750
Submission received: 31 July 2024 / Revised: 25 September 2024 / Accepted: 10 October 2024 / Published: 23 October 2024
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
Figure 1
On the left, the bibliographic dataset we collected, categorized by topic (DT stands for ‘digital twin’). The sum of works in each category exceeds the total entries, as resources may belong to multiple groups. On the right, the distribution of publications over time.
Figure 2
On the left: a typical robot development platform for agricultural monitoring. On the right: a simplified diagram of the main monitoring and navigation components (dimensions are in mm). Sensors are typically mounted on an external frame atop the UGV, offering a ‘human-like’ perspective and easy access to the devices. In many cases, two or more cameras are installed, facing the left and right crops. The actual number and placement of devices on the UGV may vary; the diagram is for conceptual purposes only. The image and diagram (modified by the authors) were taken from [24] with preliminary authorization.
Figure 3
Dataflow architecture of UGVs for agricultural monitoring. The boxes on the right (‘Field Operations Controllers’ and ‘Actuators’) represent the potential integration of field treatment functionalities, an aspect that is beyond the scope of this review.
Figure 4
Prevalence and distribution of hardware ICT components for monitoring UGVs. This study examines the number of scientific publications from the last 10 years, as investigated using Google Scholar. On the left: sensors. On the right: computational devices (CPUs are ubiquitous and therefore are not included in the statistics).

Abstract

Continuous crop monitoring enables the early detection of field emergencies such as pests, diseases, and nutritional deficits, allowing for less invasive interventions and yielding economic, environmental, and health benefits. The work organization of modern agriculture, however, is not compatible with continuous human monitoring. ICT can facilitate this process using autonomous Unmanned Ground Vehicles (UGVs) to navigate crops, detect issues, georeference them, and report to human experts in real time. This review evaluates the current state of ICT technology to determine if it supports autonomous, continuous crop monitoring. The focus is on shifting from traditional cloud-based approaches, where data are sent to remote computers for deferred processing, to a hybrid design emphasizing edge computing for real-time analysis in the field. Key aspects considered include algorithms for in-field navigation, AIoT models for detecting agricultural emergencies, and advanced edge devices that are capable of managing sensors, collecting data, performing real-time deep learning inference, ensuring precise mapping and navigation, and sending alert reports with minimal human intervention. State-of-the-art research and development in this field suggest that general, not necessarily crop-specific, prototypes of fully autonomous UGVs for continuous monitoring are now at hand. Additionally, the demands for low power consumption and affordable solutions can be practically addressed.

1. Introduction

The early detection of agricultural issues, such as pest infestations, diseases, or nutrient deficiencies, is of paramount significance from practical, economic, and environmental standpoints. Timely interventions in response to such issues can mitigate or prevent the associated damage, minimizing potential losses. Additionally, prompt intervention facilitates the adoption of less intrusive approaches, conserving resources and alleviating environmental pressures—a pivotal objective aligned with initiatives like the European Green Deal, which emphasizes the transition towards sustainability and the mitigation of climate change impacts. In the context of agricultural advancements, Information and Communication Technology (ICT) is a valuable tool for the future. Three key technologies, in particular, are widely acknowledged as game-changers—big data, artificial intelligence (AI), and robotics—which are all closely interconnected. Robotic systems play multiple roles in smart digital farming, including treatments in fields and continuous crop monitoring. This review concentrates on the latter aspect—crop surveillance. The underlying principle is to use autonomous rovers, or unmanned ground vehicles (UGVs), to mimic what a person would do. UGVs systematically traverse the cultivation area, meticulously checking plant health on both sides and pausing to investigate anomalies such as suspected lesions or discoloration in foliage. If initial assessments fail to confirm an emergency, no further action is taken. However, upon validation of an issue, precise details regarding its location and nature are documented.
UGVs offer capabilities beyond mere observation, like the capacity to systematically assess the health and developmental status of all plants within a crop, thus enabling the generation of prescription maps for subsequent precision treatments, where distinct actions are tailored to specific areas of the crop. Furthermore, robots are capable of generating digital twins of crops with low-density patterns, such as orchards, olive groves, and vineyards. For instance, the 3D representation of a tree canopy as a surface grid composed of triangles that potentially evolve over time allows for the straightforward calculation of physical quantities such as plant dimensions, canopy volume, and leaf area. These parameters provide valuable information for planning interventions, optimizing agricultural practices, and estimating forthcoming production. Such functionality can also be extended to crops with medium-density patterns, as observed in certain horticultural scenarios. UGVs could also be an effective way to create systematic agrometeorological datasets, thanks to onboard sensors measuring temperature, humidity, and other relevant parameters: the latter can feed suitable risk models, which are useful for anticipating the probability and severity of future weather-related adversities in agricultural contexts. Remarkably, thanks to UGVs, these operations can be conducted continuously, and in some cases, even during nighttime, irrespective of weather, temperature, labor availability, or vacation days. While rovers designed for field treatment necessitate higher power capabilities to accommodate tanks filled with active ingredients or operate pumps and other tools, UGVs dedicated solely to monitoring have a limited size and weight; hence, they are well-suited for all-electric implementations, offering advantages in terms of carbon mitigation, particularly when integrated with renewable energy sources.
UGVs and drones (UAVs, unmanned aerial vehicles) should not be perceived as mutually exclusive alternatives, but rather as complementary technologies. UAVs possess distinct benefits, notably the capability to monitor crops, including those inaccessible to ground rovers, at close range without causing damage to plants. Additionally, they exhibit much greater speed compared to UGVs, although this characteristic may be constrained by the processing capabilities of onboard sensors and computational devices. Moreover, their navigation is not impeded by plant rows or ground structures within the field. On the other hand, there are also significant advantages to using UGVs. They are easier to equip with autonomous navigation systems, possess extended autonomy lasting several hours compared to the tens of minutes typically seen with drones, offer closer-proximity views of plants ranging from soil to human eye level, can accommodate heavier and more advanced sensors, computational devices, and AI models, and are generally subject to less restrictive regulations. When selecting the appropriate solution for monitoring and data collection in agriculture, the primary consideration is the plant density of the crop. On one end of the spectrum are tree crops characterized by typical planting layouts featuring rows and plants spaced meters apart. These conditions are highly favorable for UGVs, which can efficiently navigate the entire field. Conversely, crops such as wheat and rice exhibit densities often exceeding 250 plants per square meter. Traditionally, this scenario is managed using specialized machinery, such as tractors equipped with wide-spraying booms spanning tens of meters, aimed at minimizing damage while covering expansive field areas. UGVs are not suited for this context, which is better addressed by UAVs or “sensorized” machinery, i.e., standard tractors and other equipment outfitted with sensors, computational devices, and deterministic or AI algorithms. Such sensorized gears are beyond the scope of this review. In between these extremes lie horticultural crops, characterized by varying plant layouts and accessibility conditions that evolve significantly over time due to plant growth. Each case within this category deserves individual consideration; often, strategies such as navigating between rows during the early stages of plant development and exploiting dedicated lanes for later inspections are employed. UAVs can complement rovers by providing the aerial perspectives necessary for comprehensive 3D plant representations, even for orchards, vineyards, and olive groves. Additionally, they can generate coarse maps of vegetation indices or other pertinent information to facilitate targeted inspections by UGVs. Beyond drones, remote sensing technology may provide valuable information through satellite imagery. However, for precision farming applications, such images are constrained by a low spatial resolution, typically around 10 m for publicly available datasets like those from Copernicus. While commercial satellites offer higher resolutions of up to 0.30 m, their service policies often pose challenges for agricultural applications: high-resolution images typically come at a cost ranging from EUR 50 to 70 per square kilometer (1 km2 = 100 hectares), and for such applications, there is the impractical requirement of purchasing images covering areas of 25 km2 or larger.
Based on the considerations above, one could argue that the “best” UGV is one with a minimal size, to optimize the accessibility of plants within the crop. Furthermore, small rovers are more “gentle” with the crop soil, as they exert less pressure (if L is the UGV’s typical linear size, weight increases as L³ while the wheels/track size increases as L²). These points need to be well balanced with other practical constraints. First, a monitoring-focused rover requires stability to prevent the capture of blurred images and noisy measurements, particularly when sensors are positioned at a distance from the ground. Given that soil surfaces are often rugged, heavy vehicles equipped with suitable suspension systems demonstrate superior performance in this regard. Second, adequately sized batteries are essential to power the engine, high-performance sensors, computing devices (particularly for AI inference), and communication equipment. These batteries, despite being of medium size, contribute significantly to the weight of the rover. Finally, appropriately dimensioned rovers exhibit improved traction and achieve higher maximum speeds on soft soils, especially those that are wet, sloping, and tilled.
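To make this scaling explicit: under the assumptions stated above (weight growing as L³ and contact area as L²), the nominal ground pressure scales as p ∝ L³/L² = L, so a rover of half the linear size exerts roughly half the pressure on the soil.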
Two primary classes of rovers exist: those adopting a general-purpose approach capable of handling diverse crops and situations, and those specifically engineered for particular scenarios, such as a single crop with a predetermined planting layout typically found in flat soils with smooth surfaces. Operating within such controlled environments enables performance optimization but may sacrifice generality. In both cases, all-electric engines emerge as the preferable choice due to their ability to reduce the environmental impacts and achieve more compact mechanical designs, and the lesser importance of high speeds. While mechanized farming treatments can occur at speeds of 15 km/h or higher, detailed monitoring involving real-time data acquisition and AI inference is typically conducted at lower speeds. This is also due to practical constraints, such as the risk of image quality degradation, particularly on uneven terrain. What is the surveillance capability of a UGV like the one described above? To illustrate, let us examine two distinct scenarios: a vineyard employing a typical Guyot system and an artichoke crop with planting layouts of 2 m × 1 m and 1 m × 1 m, respectively (the measurements refer to the distance between rows and the distance between plants within the same row). These correspond to linear developments of 5 km/ha and 10 km/ha (1 ha = 10,000 m²). At a reference speed of 3.6 km/h (1 m/s), introducing a slow-down factor of 2 for the time spent on stops for detailed inspections, u-turns between inter-row aisles, etc., assuming a “working day” of 8 h for the UGV, this yields a monitoring capacity of 2.8 ha/day for the vineyard and 1.4 ha/day for the artichoke crop. Now, taking into account a revisit time, that is, the interval between two successive observations of the same point in the crop, of 4 days, it is roughly estimated that a single rover can autonomously surveil 12 ha of a vineyard or 6 ha of an artichoke crop. Needless to say, the profitability of these outcomes significantly depends on thorough considerations of the costs and benefits for each specific situation, especially concerning common crop emergencies, the resources required to address them, and the potential savings that early diagnosis could yield.
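This capacity estimate is easily reproduced in a few lines of code. The sketch below simply re-implements the back-of-the-envelope calculation with the same illustrative parameters used above (row spacing, 1 m/s nominal speed, slow-down factor of 2, 8 h working day, 4-day revisit time); small differences with respect to the figures quoted in the text are due to rounding.

```cpp
#include <cstdio>

// Back-of-the-envelope UGV monitoring capacity, as estimated in the text.
// All parameters are the illustrative values discussed above, not measured data.
double monitored_hectares(double row_spacing_m,   // distance between rows [m]
                          double speed_mps,       // nominal travel speed [m/s]
                          double slowdown,        // factor for stops, u-turns, etc.
                          double hours_per_day,   // UGV "working day" [h]
                          double revisit_days)    // interval between two visits [days]
{
    double row_metres_per_ha = 10000.0 / row_spacing_m;          // linear development [m/ha]
    double effective_speed   = speed_mps / slowdown;             // [m/s]
    double metres_per_day    = effective_speed * hours_per_day * 3600.0;
    double ha_per_day        = metres_per_day / row_metres_per_ha;
    return ha_per_day * revisit_days;                            // area covered per revisit cycle
}

int main() {
    std::printf("vineyard (2 m rows) : %.1f ha\n", monitored_hectares(2.0, 1.0, 2.0, 8.0, 4.0));
    std::printf("artichoke (1 m rows): %.1f ha\n", monitored_hectares(1.0, 1.0, 2.0, 8.0, 4.0));
    return 0;
}
```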
While several commercial UGVs exist for field operations like seeding, weeding, and pruning—each with its own limitations—UGVs designed exclusively for monitoring are far less common. Although the former can also be used for data acquisition and emergency detection, UGVs explicitly designed for crop surveillance are preferable for several reasons: they are generally smaller, lighter, less power-demanding, and more affordable, reducing navigation difficulties and minimizing unnecessary energy and cost; they can be more easily implemented as fully autonomous vehicles; they pose a lower risk of damaging crops or causing harm to people; their focus on ICT-oriented solutions enables advanced functions, such as edge-based image classification for the real-time diagnosis of pests and diseases, compared to traditional UGVs that are more focused on field operations; their deployment in fields is more practical. In addition, UGVs for monitoring have the potential advantage of addressing a wider range of situations compared to UGVs designed for crop treatments, which are tailored to specific tasks or types of cultivation. In fact, while the specialization of the latter is primarily achieved by installing specific mechanical tools (such as nozzles for spraying or scissors for cutting), the former can address different applications (e.g., monitoring various crops) through software. This is achieved by selecting or calibrating different AI models and algorithms to suit the situation at hand.
During our research, we did not find any ‘turnkey’ commercial UGV that meets all the criteria outlined above. For these reasons, and given the clear advantages of open solutions over proprietary market models, we favored robotic prototype architectures—specifically those where expert manufacturers handle the design and construction of the mechatronic platform, while sensors and computational devices are selected and installed through a decision process that is shared with the customer. This approach allows research groups focused primarily on AI and computer science, rather than robotics, to enter this field of study. These prototypes, typically costing between EUR 20 and 40 k (depending on the required sensors and hardware specifications), offer full customization and the flexibility to test various ICT setups, helping to identify the most suitable configurations. Therefore, in the following sections, the emphasis will be on cutting-edge ICT solutions that can be integrated to design a state-of-the-art UGV for crop monitoring, rather than on the existing all-in-one proprietary models currently available on the market.

2. Background and Related Literature

While debates and discussions about the potential realization of autonomous rovers date back a long time, we are interested in recent works addressing the real applications and practical implementations of UGVs. Alatise et al. [1] analyze Autonomous Mobile Robots (AMRs) across various sectors, including agriculture, providing a useful introduction to common issues these machines face: locomotion, perception, and navigation. They also examine future challenges related to path planning, localization, and obstacle avoidance, as well as sensor technologies, with a special focus on fusion algorithms. Interestingly, they introduce a relevant question for agriculture: the use of artificial landmarks. Their work serves as an introductory resource to the world of recent UGVs for agriculture, clarifying common concepts and ideas. In [2], the authors focus on an often underrated but crucial aspect of the implementation of UGVs in real-world field activities: stability. They examine various soil types commonly encountered in everyday farming practices and the mechanical damping systems UGVs require to navigate them. The authors introduce stability criteria for both static and dynamic cases and provide a comprehensive list of agricultural UGVs, analyzing their suitability for different soil conditions. In [3], the focus is on both the automation of conventional vehicles—an area not covered in this review—and specifically designed UGVs. The authors examine wheeled robots, including wheel-legged variants, and focus on common steering and driving systems, discussing their advantages and drawbacks. Many real robot examples are considered. They provide a highly illustrative table of key features to consider when evaluating the suitability of an agricultural UGV for a specific real-world context: dimensions, weight, speed, position accuracy, clearance and track width, and autonomy. A review of collision avoidance methods for mobile robots is presented in [4]. While the focus is on wheeled machines, many of the outlined results can be extended to generic UGVs in agriculture. Obstacle detection is analyzed in detail, with a comprehensive list being provided of the most common sensors used for this purpose. The subsequent step, collision avoidance, is considered in its various implementations, including potential fields, population-based search algorithms, neural networks, and fuzzy logic. Most reviews of agricultural UGVs share a focus on crop monitoring and plant treatments. Mahmud et al. [5] study the application of robots to field activities such as planting, spraying, and harvesting, and specifically, crop inspection—the focus of our review. They introduce computer vision and machine learning as key technologies for the early detection of diseases in both greenhouses and open fields. The authors also provide a valuable list of vision schemes used in target detection, detailing their functions, advantages, and disadvantages. In the real farming world, the end-users are often small-scale producers with limited economic resources for digitalizing their work compared to large industrial-scale farms. Nevertheless, ICT transition is still possible, even for this form of agriculture, as illustrated in [6]. While the general landscape of ICT technology is considered, without the study being limited to robotics, this work is notable for the vision the authors provide, as well as its detailed review of both the technical and socio-economic points of view. 
A highly recommended review is [7], which is primarily focused on the use of UGVs for agricultural mapping. In addition to hardware (robots and sensors), localization and mapping, path planning and exploration, and machine learning and computer vision, the review also addresses significant pre- and post-processing challenges, such as handling outliers and noise from the high-resolution sensors used for 3D point cloud and plant model generation. Notably, the authors highlight edge computing as an alternative to cloud computing for local data analysis and inference using low-power/energy-efficient computational devices. While the paper dates from early 2023, the rapid evolution of this technological field means that some of the presented solutions may already have been surpassed by current technologies. The review concludes with a brief section dedicated to the use of AI algorithms for data classification and regression in agriculture. Given the breadth and structure of this topic, dedicated reviews like [8] are recommended for further exploration. In [9], several technological pillars of smart agriculture are identified, aligning with a common vision shared by domain experts: the Internet of Things (IoT), artificial intelligence (AI), mobile robotics, and big data. In applications like farming, the first two are so intertwined that a dedicated acronym, AIoT, has been introduced. Among mobile platforms, the review focuses on an area we believe will see significant development in the near future: the collaboration between UGVs and drones. As with previous, still recent, works, the list of commercial technological solutions needs careful consideration, as newer and more advanced versions of similar devices are continually emerging. Control methods are one of the key topics in [10], where several crucial issues are identified: wireless communication, soil effects on UGV traction, terramechanics (interaction between robots and soil), and environment recognition. The review is extensive and comprehensive, addressing many aspects of the dynamics of the use of autonomous rovers for agriculture. Particular mention is given to (i) vehicle motion models and (ii) logic and control systems. Additionally, attention is paid to computer vision algorithms that are rarely used in this field, such as the use of Hough transform for plant row detection, although their effectiveness in partially structured environments like crops needs verification for specific cases. Data communication is another valuable topic in this review, which compares different standards along with their strengths and weaknesses. In [11], a general analysis of the benefits and challenges of automation in agriculture is presented. Of particular interest is the discussion of the use of UGVs for data collection and analysis, highlighting the innovative concepts of digital twins and “new ecologies of automation”. A crucial part of the analysis regards the feasibility of automation, including the inevitable economic shifts it will bring, as well as the advantages and disadvantages of adopting these innovative technologies. In our view, these are the most noteworthy recent reviews of the use of UGVs in agriculture. They provide a foundation for introducing basic concepts and paradigms, facilitating the formulation and development of new ideas. 
We will not repeat common topics that have already been extensively covered in the literature, such as the classification of UGVs based on their steering geometry or the fundamentals of robot navigation; for these, we refer to the aforementioned surveys. Our purpose is twofold: (i) to address topics that are less frequently considered, either because they require a multidisciplinary approach across different areas of ICT or involve cutting-edge technology that was introduced very recently, and (ii) to provide a useful tool for supporting the decisions of potential UGV buyers or builders, helping them make well-informed choices. Our paper is organized as follows. In Section 3, we describe the methodology used to create the two-level bibliographic dataset. In Section 4, we present the structure of a typical robotic platform for crop monitoring and describe the conceptual data flow between components, including optical and positioning sensors, computational modules, AI classifiers, and possible actuators. Section 5 focuses on sensors, analyzing recent trends in technology, with attention to different types of cameras, LiDARs, GNSSs, and IMUs. In Section 6, we illustrate the governing software, simulation environments, and programming frameworks, three key aspects of the design of these sophisticated machines. Section 7 covers the on-board hardware, including the latest developments in edge CPUs, GPUs, and accelerators for AI inference. The state-of-the-art in navigation, localization, and mapping is described in Section 8. Section 9 focuses on the use of algorithms, AI models, and digital twins for crops. Finally, Section 10 addresses the discussion and conclusions.

3. Methodology

A bibliographic database was collected using the following search engines: Scopus by Elsevier [12], Google Scholar [13], arXiv [14], IEEE Xplore [15], PubMed [16], Science.gov [17], ScienceDirect [18], Semantic Scholar [19], and WorldWideScience [20]. The research queries used were: “UGV” AND “agriculture”; “algorithm” AND “agriculture” AND “UGV”; “deep learning” AND “agriculture” AND “UGV”; “artificial intelligence” AND “agriculture” AND “UGV”; “digital twin” AND “agriculture” AND “UGV”; “software” AND “agriculture” AND “UGV”; “ROS” AND “agriculture” AND “UGV”; “operating systems” AND “agriculture” AND “UGV”; “simulation” AND “environment” AND “agriculture” AND “UGV”; “embedded” AND “system” AND “agriculture” AND “UGV”. After removing duplicates and outliers, further selection was conducted based on the following criteria: (i) only articles/reviews from scientific journals, book chapters, conference proceedings, electronic preprints available in open-access repositories such as arXiv, and technical web pages (including those from sensors/hardware manufacturers); (ii) English language only; (iii) open access only, with very few unavoidable exceptions; (iv) only resources from recent years (approximately the last decade; see Figure 1 right), with very few unavoidable exceptions. This resulted in a gross list of more than 1400 resources: the BibTeX version of the list is available upon request. Figure 1 displays some features of the dataset.
The largest category, ‘algorithms’, includes methods for (i) rover navigation in unstructured environments like crops, (ii) AI-based image and video classifiers, and (iii) task-specific algorithms for data acquisition. The ‘UGV’ group gathers all resources concerning the use of robots in agriculture, while ‘navigation’ includes works on distinct functions like path planning, obstacle avoidance, Simultaneous Localization and Mapping, and plant row recognition. As the name suggests, the ‘UAV’ group collects all works focused on drones, which may seem unrelated to the object of this review. However, drones share several important technologies with UGVs, such as the use of LiDARs for acquiring 3D point clouds to define digital twins of crops, AI-based models for in-field information collection, and localization and mapping algorithms. As we will discuss later, we believe UAV-UGV collaboration is one of the most promising R&D lines in digital agriculture.
The aforementioned categories, along with ‘reviews’—a group devoted to surveys on farming robotics—account for the majority of our bibliographic database, which aims to be a high-fidelity portrait of recent scientific production on the topic. Most of the bibliographic dataset concentrates on one or more of these subjects. Moving to a more agronomic focus, significant effort is being made to find robotic solutions tailored to the management of specific crops, typically fruit trees, vineyards, or vegetables. These include both monitoring and field operations (group ‘field’). Often, the less populated categories provide the most challenging and stimulating perspectives for future farming, despite the limited literature available. One such category is the rover–drone collaboration (‘UGV-UAV’), where the goal is to complement near-soil information with data acquired from above. UAVs can also support the rapid mapping of unexplored crops to facilitate route planning for UGV navigation. Advanced sensors like high-resolution 3D LiDARs and high-performing RGB-D and hyperspectral cameras are now available at affordable costs, opening up unprecedented possibilities for acquiring large, high-quality datasets (‘sensors’). Very effective wireless communication technology (‘communication’) makes data exchange feasible even in areas without standard smartphone coverage, a common situation in everyday farming, and allows for centimeter-level localization (RTK) via differential GNSS positioning. Today, thanks to significant technological innovations, there is the possibility to shift from, or more interestingly, to pair the cloud-centric approach—where data are sent to remote computers for analysis—with the edge paradigm, where data are processed at the point of collection, i.e., by the rover. This is the main focus of works in the ‘IoT’ category. Such implementations are favored by the recent emergence of edge computing devices (CPUs, GPUs, and hardware accelerators for deep learning models) that offer a suitable performance and limited power consumption (‘HW’ group). Unsurprisingly, the effective implementation of the edge paradigm requires dedicated software design. This, along with the model environments required for the advanced simulation of robots’ responses to real in-field situations, is the focus of the ‘UGV SW’ group. The final group, ‘DT’, stands for ‘digital twin’. Similar to other application fields, agriculture can greatly benefit from this concept, which involves generating 3D models of cultivated plants to provide quantitative measures for use in decision support systems and tools for harvest evaluation. This methodology is in its early stages; there is limited work available, which is mainly focused on vineyards and orchards. As anticipated, our target was to create a two-level bibliographic dataset. We used bibliometric filters for the automatic preprocessing of the large (level 0) dataset, both from existing libraries and those developed in-house from scratch. This approach saved a significant amount of time. However, we believe that such computer procedures often fail to deliver the high-quality results provided by human expert labor. Therefore, a further iteration involving manual annotation and selection was performed. While time-consuming, this process provided us with a refined, high-quality, and well-balanced dataset of more than 150 titles, which is the focus of this work.

4. Architecture and Data Workflow of UGVs

UGVs for agricultural monitoring come in various designs, differing in size, structure, and operational concepts. Most have typical dimensions in the range of 0.5–1 m to facilitate access to crops with narrow interrow spacing, weigh 100 kg or less, and are powered by electric motors using internal batteries (this is not always the case for UGVs designed for crop treatments, which require more power to operate hydraulic pumps and other mechanical tools, often relying on internal combustion engines). Figure 2 shows an effective robot development platform for monitoring, as described in detail in the Introduction. While this example features a skid-steering rover with rubber tracks, other designs using wheels with different drive systems are also possible. Each option has its advantages and drawbacks (e.g., wheel-based systems generally offer a higher operational speed but may struggle in uneven or soft soils, particularly in muddy conditions). Since this review focuses primarily on ICT aspects, we do not delve into vehicle motion models or other mechatronic issues. For those topics, interested readers can refer to [2,4,10,21,22,23].
The functional diagram in Figure 3 illustrates the main components of an exemplary UGV designed for precision agriculture tasks, including optical and positioning sensors, computational modules, and potentially actuators. It also outlines the data flows between these components, demonstrating how sensory inputs are processed and transformed into actionable insights [25]. For simplicity, the diagram groups activities such as harvesting, pruning, and spraying under the term ‘field operations’ and omits ancillary components, such as the power supply system that controls the orientation of photovoltaic (PV) panels [26]. The ‘remote server’ box accounts for the option to perform data exchange using cloud protocols either during field monitoring (via wireless communication network, WCN) or afterward, through WCN or physical links/devices (e.g., cables, memory device transfers). This step is well-suited for non-real-time operations, such as the generation of prescription maps or information for digital twins.

5. Sensors

Positioning sensors provide real-time location data, allowing the vehicle to navigate accurately within its environment. These sensors, such as GNSS, compass, and inertial measurement units (IMUs), ensure precise movement and alignment, which are crucial for tasks like mapping and path planning. Additionally, optical sensors play a key role in reconstructing the spatial context in which the UGV operates, enhancing navigation efficiency by avoiding obstacles and enabling more effective interaction with the environment. For this interaction, optical sensors such as RGB or infrared cameras provide detailed visual information about objects and terrain. This allows the UGV to identify and analyze crops, soil conditions, or obstacles, making informed decisions about actions like weeding, harvesting, or applying treatments with precision. In particular, the primary purpose of RGB sensors is to generate a stream of plant images that is accurate enough to detect potential crop issues through image classification using deep learning models. Finally, LiDARs can perform tasks ranging from navigation and mapping (2D models) to more sophisticated analyses of the surrounding environment and the generation of accurate geometric models of objects (3D models).
To better understand the prevalence of technology and the diffusion of various sensors, we used Google Scholar to conduct a simple statistical analysis of scientific works on the use of UGVs for agriculture published in the last decade. We employed the following queries: UGV AND agriculture AND “RGB-D”; UGV AND agriculture AND RGB -RGB-D; UGV AND agriculture AND (multispectral OR “multi-spectral” OR hyperspectral OR “hyper-spectral”); UGV AND agriculture AND LIDAR; UGV AND agriculture AND (GNSS OR GPS); UGV AND agriculture AND IMU. The results, reported in Figure 4 (left), indicate a general exponential growth in interest over the years and, as expected, highlight the importance of GNSS devices for precise localization within the crop, along with the valuable ability to generate prescription maps for deferred precision treatments. LiDARs are gaining popularity, partly due to the recent availability of affordable, high-performance 3D models. Optical sensors are also widely used, as demonstrated by the aggregation of occurrences of RGB, RGB-D, and infrared sensors, although some overlap is present in the publications.

5.1. RGB-D Cameras

RGB-D cameras capture both color and depth data, making their use essential in robotics and unmanned ground vehicles for navigation, obstacle avoidance, and object manipulation. Their ability to provide detailed scene reconstructions and spatial awareness enhances object recognition, path planning, and environmental interaction, highlighting their critical role in these advanced applications. Although the RGB component is extracted similarly to conventional cameras, using an array of photosensitive elements to capture red, green, and blue light, depth information is obtained through various methods, primarily time-of-flight (ToF), structured light, and stereo vision. ToF cameras emit infrared light pulses and measure the time taken for the light to reflect back from the scene. The time delay is proportional to the distance, enabling the calculation of depth for each pixel. Structured Light (SL) cameras project a known pattern of light onto the scene. The deformation of this pattern when viewed from a different angle is analyzed to compute the depth. The stereo vision (SV) working principle uses two or more cameras positioned at different viewpoints. By comparing the disparity between corresponding points in the images captured by each camera, depth information is derived using triangulation [27,28,29].
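For the stereo vision case, the triangulation step reduces to a simple relation: with two rectified cameras of focal length f (expressed in pixels) and baseline B, a point observed with disparity d between the two images lies at depth Z = f·B/d. Consequently, larger baselines and higher image resolutions improve depth accuracy, while distant points (small disparities) are measured less precisely.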
While ToF cameras are highly effective in controlled lighting conditions, their performance can degrade in external sunlight scenes due to interference from ambient light, reduced SNR, overexposure, thermal effects, increased power demands, and limited range. These challenges necessitate additional design considerations and mitigation strategies for outdoor applications. To improve daylight accuracy in ToF RGB-D cameras, several strategies are put in place, such as optical filtering (i.e., using narrow-band infrared filters to block out the broad spectrum of sunlight while allowing the specific wavelength used by the ToF camera to pass through) and high-frequency modulation (i.e., modulating the emitted light at a high frequency that is less likely to be affected by ambient light).
An overview of the main RGB-D cameras is given in Table 1. Different models for outdoor use are compared according to characteristics such as working principle, shutter type, measuring range, pixel resolution, frame rate, and interface.
In the authors’ experience, global shutter technology may be a better solution than rolling shutter technology. In the latter approach, the optical sensor is exposed to incoming light in a row-by-row sequence, unlike the global shutter, where the entire sensor is read simultaneously. This causes a delay in reading the last ‘pixel’ compared to the first (readout time), which can degrade image quality due to the motion of the UGV. While global shutters were traditionally associated with more expensive cameras, affordable options are now available on the market. For example, the Orbbec Gemini 336 L in Table 1 can be purchased for less than EUR 500.

5.2. RGB Cameras

Standard RGB cameras are commonly employed in precision agriculture for tasks such as crop monitoring, growth analysis, and visual inspections. These cameras capture high-resolution color images that help farmers assess crop conditions by detecting visual signs of anomalies like diseases, pest damage, or nutrient deficiencies. While a broad range of commercial cameras are available, harsh field conditions such as dust, water, and temperature extremes limit the choice to models that meet stringent environmental protection standards.

5.3. Multispectral Cameras

Multispectral remote sensing involves capturing data across multiple wavelengths of light, including those outside the visible spectrum, such as infrared data. This technology is crucial in agriculture because it reveals information about plant health, water content, and nutrient levels that is not visible to the human eye. By analyzing the reflectance in different spectral bands, farmers can monitor crop stress, detect disease, and optimize inputs like water and fertilizers. While UAVs are the primary platform for multispectral imaging due to their ability to quickly cover large areas, UGVs can also be used for specific ground-level tasks, such as soil nutrient mapping, weed detection, and close-up plant health assessments, providing a more detailed view where aerial data may be insufficient. In addition to multispectral cameras, hyperspectral cameras offer even greater precision by capturing data across hundreds of narrow spectral bands. These high-resolution spectral data enable a more detailed analysis of plant health, soil composition, and disease detection, making hyperspectral imaging especially valuable for identifying subtle changes in crop conditions. When mounted on UGVs, hyperspectral cameras allow for detailed, ground-level assessments of individual plants or specific areas in the field [30]. Unfortunately, these cameras can be quite costly. Additionally, infrared sensors become more expensive as the wavelengths of light they capture move further away from the visible range. Thermal cameras are also used in precision agriculture, particularly for detecting plant stress caused by water deficiencies. By measuring the heat emitted by plants, thermal cameras can identify areas where crops are experiencing water stress, allowing for targeted irrigation. On UGVs, thermal imaging can provide a close-up, real-time monitoring of soil moisture and plant temperature, offering a more granular approach to managing water use efficiently [31,32].

5.4. LiDARs

LiDAR (Light Detection and Ranging) is a cutting-edge technology that uses laser pulses to measure distances, creating highly accurate 3D maps of the environment. In the context of unmanned ground vehicles, LiDARs are essential for enabling autonomous navigation, obstacle detection, and terrain mapping. By providing detailed spatial data, LiDAR systems allow UGVs to navigate complex environments, such as agricultural fields, with precision and efficiency. The basic working principle of LiDAR involves a Laser Emitter, which fires short pulses of laser light towards the environment, a Receiver, which captures the reflected light from objects, measuring the time it takes to return, and a Scanning System, which aims the laser beam to cover a large area. The most widely used scanning solution is mechanical spinning, offering a high signal-to-noise ratio. However, its bulky design and long scanning time make it sensitive to vibrations. Other common scanning systems include microelectromechanical systems (MEMS), flash, and optical phased arrays. MEMS are a more compact version of mechanical spinning and can be programmed to adjust the scanning trajectory for better system integration. Flash systems, which use optical diffusion, are an alternative to rotating scanning systems, but their performance is limited by the power of the laser source, especially for long-range measurements, as a single diffused laser must cover the entire area of interest. Optical phased arrays modulate the phase of light, allowing for control over the wave-front shape and eliminating the need for time-consuming laser scanning. This method is particularly suitable for high-vibration, heavy-duty applications [33]. A Processing Unit calculates the distance based on the time delay, combining these data with the angle of the emitted laser to generate a 3D point cloud. In agriculture, UGVs equipped with LiDAR use these point clouds to sense obstacles, map the terrain, and monitor crop growth. LiDAR is highly effective in outdoor environments due to its ability to operate under various lighting conditions and over large distances [34]. While 3D LiDARs provide a complete understanding of the environment, 2D LiDARs offer simpler, cost-effective solutions for tasks that do not require vertical scanning, such as basic navigation, collision avoidance, and mapping in flat or controlled environments. They are often found in robots or UGVs that operate on level surfaces, such as in warehouses or other controlled environments. An overview of the main LiDARs is outlined in Table 2. Different models for outdoor use are compared according to characteristics such as angular resolution, scan rate, max measuring range, field of view, accuracy, and interface.
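As a minimal illustration of the processing step described above, the sketch below computes the range from the measured round-trip time and converts it, together with the beam angles, into a Cartesian point in the sensor frame. This is a simplified spherical model with hypothetical input values; real devices also handle intensity, timestamps, and motion compensation.

```cpp
#include <cmath>
#include <cstdio>

struct Point3D { double x, y, z; };

// Range from the round-trip time of flight: the pulse travels to the target and back.
double range_from_tof(double round_trip_s) {
    constexpr double c = 299792458.0;                  // speed of light [m/s]
    return 0.5 * c * round_trip_s;
}

// Convert one return (range + beam angles, in radians) to sensor-frame coordinates.
Point3D to_cartesian(double range_m, double azimuth, double elevation) {
    return { range_m * std::cos(elevation) * std::cos(azimuth),
             range_m * std::cos(elevation) * std::sin(azimuth),
             range_m * std::sin(elevation) };
}

int main() {
    constexpr double kDegToRad = 3.14159265358979323846 / 180.0;
    // Hypothetical return: 200 ns round trip (~30 m), 10 deg azimuth, 2 deg elevation.
    double r  = range_from_tof(200e-9);
    Point3D p = to_cartesian(r, 10.0 * kDegToRad, 2.0 * kDegToRad);
    std::printf("range %.2f m -> (%.2f, %.2f, %.2f)\n", r, p.x, p.y, p.z);
    return 0;
}
```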

5.5. GNSSs and IMUs

GNSS (Global Navigation Satellite System) refers to satellite constellations that transmit signals from space to the Earth’s surface, enabling precise geopositioning. Various GNSSs exist, including GPS (USA), Galileo (EU), GLONASS (Russia), and BeiDou (China). The accuracy of GNSS technologies varies: around 4 m for standard smartphones (expected to improve to sub-meter accuracy in the coming years); up to 2.5 cm in one minute with PPP (Precise Point Positioning); and centimeter-level accuracy or better with differential GNSSs, such as RTK (Real-Time Kinematic), which uses a ground-based station in addition to the mobile unit onboard the UGV [35,36]. RTK devices specifically designed for installation on agricultural machinery for precision farming have been available for a long time and are standard equipment in new-generation tractors. When purchased aftermarket, they are often excessively expensive, largely due to their robust construction and high protection ratings (IP) against dust and liquid intrusion, which may not fully justify the cost. Fortunately, commercial edge solutions from the ICT sector are available today at significantly lower costs, often one order of magnitude cheaper [37,38]. These devices go beyond simple plug-and-play sticks for basic GNSS signal streaming, offering fully flexible precise positioning and improved position-fix capabilities.
IMUs (Inertial Measurement Units) provide measurements of acceleration, angular velocity, and, in some cases, magnetic fields. These data are used to determine the motion and orientation of an object, such as an entire UGV or the lens of an RGB camera directed at crops. Although the costs of industrial-grade IMUs vary widely, a reasonable model without a magnetometer for field-monitoring UGVs costs in the range of EUR 100 to 1000. IMUs are often used in conjunction with GNSSs, LiDARs, and even RGB cameras within multi-sensor and data-fusion approaches [39,40].

6. Governing Software

Based on an overview of the support technologies for terrestrial mobile robots used for autonomous mapping in agriculture, we outline the following aspects related to the operating systems, programming languages, and simulation environments commonly employed in rover platforms.

6.1. Operating Systems

The Robot Operating System (ROS) is predominant in the papers examined in the context of our state-of-the-art study on the use of UGVs for agriculture. Other robotic operating systems in the field of agriculture were not found to be relevant, which is why they were not discussed in detail. Remarkably, when major ICT industry players propose solutions for robot implementations, whether computational devices or sensors, they primarily focus on ROS environments. This is the case, for instance, of Nvidia with Isaac ROS [41] and Intel with RealSense cameras [42]. Agricultural rovers tend to use robust and high-efficiency operating systems that can handle the complex operations required for autonomous navigation and mapping tasks. Despite its name, the Robot Operating System is not an operating system in the traditional sense, but rather a framework for implementing robot software. It is widely adopted for demanding farming applications thanks to its modular architecture and the wide range of libraries and drivers that are available. Two main versions of ROS [43,44,45,46,47] can be used: ROS1 [7] and ROS2 [48,49,50,51]. ROS1, launched in 2007, offers a well-supported and established structure, ideal for many existing systems, with a broad ecosystem of packages and a large community. However, it is being deprecated (support is scheduled to end in 2025), encouraging the transition to ROS2. The latter, introduced in 2017, is designed with significant improvements for supporting real-time systems, enhanced communication security, and the more efficient management of distributed devices, making ROS2 particularly suitable for more demanding applications and use in critical environments. We summarize the main differences as follows:
  • Communication Architecture:
    ROS1 uses a centralized communication model with a single node (the ROS Master), which facilitates node connections but is a possible bottleneck and a point of failure.
    ROS2 adopts a decentralized architecture without a master node, using the DDS (Data Distribution Service), which improves system scalability and resilience.
  • Real-time Performance:
    ROS1 does not provide native support for real-time operations.
    ROS2 supports real-time capabilities, allowing for the more precise timing of operations, which is crucial for applications such as motor control and other critical robotic functions.
  • Multiplatform Support:
    ROS1 is primarily developed for Unix-based operating systems, mainly Ubuntu.
    ROS2 is more portable and supports a variety of OSs, including Windows, macOS, and various Linux distributions, making it more flexible for different development environments.
  • Security:
    ROS1 has limited security options, which can be a concern in commercial applications where agricultural data privacy matters.
    ROS2 introduces advanced security features such as communication encryption, node authentication, and authorization, improving security in broader and critical scenarios.
  • Communication Management:
    ROS1 uses a simple TCP/UDP-based communication model, which may not be efficient in complex or large networks.
    ROS2 uses DDS for communication management, which is highly scalable and efficient for large-scale distributed systems.
  • Usability and Documentation:
    ROS1 has a large user base and extensive documentation with many years of development, making it accessible for new users.
    ROS2 is still expanding its documentation and examples but is quickly gaining ground due to its modern architecture and new features.
These differences make ROS2 the go-to choice for new projects in agriculture that require high performance, enhanced security, and multiplatform support, while ROS1 may still be adequate for less critical applications or for teams with existing ROS1 experience and infrastructure.
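As a concrete, minimal illustration of the ROS2 programming model (a generic sketch, not code from any of the cited systems), the rclcpp node below publishes a status string once per second; the node name, topic, and message content are placeholders.

```cpp
// Minimal ROS2 (rclcpp) node: publishes a status string at 1 Hz.
// Node name, topic, and payload are illustrative placeholders.
#include <chrono>
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

using namespace std::chrono_literals;

class StatusPublisher : public rclcpp::Node {
public:
  StatusPublisher() : Node("ugv_status_publisher") {
    pub_   = create_publisher<std_msgs::msg::String>("ugv/status", 10);
    timer_ = create_wall_timer(1s, [this]() {
      std_msgs::msg::String msg;
      msg.data = "monitoring";
      pub_->publish(msg);
    });
  }

private:
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr pub_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char **argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<StatusPublisher>());
  rclcpp::shutdown();
  return 0;
}
```

Note that the equivalent ROS1 node would register with the central ROS Master, whereas here node discovery is handled by the underlying DDS layer, reflecting the architectural difference summarized above.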

6.2. Simulation Environments

Simulation environments are essential in the development and testing of robotic applications, especially for rovers and UGVs used in agriculture. In fact, developing robotic systems for farm operations relies heavily on simulation as a vital and dependable method for evaluating different control algorithms for robots or robotic arms. Unfortunately, most existing simulation software primarily focuses on urban, traffic, or industrial settings. The simulation of off-road scenarios is relatively rare, and there is a noticeable absence of a dedicated simulator for agricultural robotics research [52]. The most well-known simulation environments are as follows:
  • Gazebo allows for the simulation of robots in both indoor and outdoor settings, including sensors and actuators, with a high degree of realism thanks to precise physical models. This makes it useful for testing navigation and monitoring in complex agricultural environments.
  • CoppeliaSim (formerly V-REP) offers detailed control over simulations, supporting various physical engines. This means that it is able to deal with complex interactions between rovers and various types of terrain or crops.
  • Webots is a development environment for the simulation of autonomous mobile robots. It is known for its ease of use and extensive support of robot models, and it is used to prototype and test navigation and terrain-mapping algorithms.
  • Nvidia’s Isaac Sim is an advanced simulation environment that utilizes GPU acceleration for realistic and detailed simulations, optimized for robotics and AI. This makes it particularly useful for simulating agricultural rovers that use AI for the visual analysis of crops.
The most important supported features of such environments, with relevant bibliography references, are presented in Table 3.

6.3. Programming Languages and Frameworks

Among modern languages, C/C++ are often preferred for the low-level programming of hardware components and modules requiring high efficiency and real-time operations, thanks to the availability of highly optimized compilers. Python, on the other hand, is frequently used for writing high-level scripts and integrating artificial intelligence algorithms, such as those used for computer vision and machine learning, due to its simplified syntax and vast ecosystem of supporting libraries. AI-based algorithms, such as image or video classifiers based on deep learning models, typically rely on specific frameworks for machine learning programming, like PyTorch [60] or TensorFlow [61]. Both enable automatic multi-GPU and parallel computing and interface directly with common development environments for high-performance GPU-accelerated applications, such as CUDA [62], in a “transparent” way for the programmer, who does not need to directly invoke the environment’s APIs. Even when initial attempts do not succeed, users can rely on a large, well-established community of developers and extensive documentation, which facilitate the implementation of more complex tasks. While CUDA was a pioneering initiative, providing efficient libraries for the high-performance implementation of machine learning and linear algebra tasks using GPUs as hardware accelerators, it is limited by being proprietary software that only works with Nvidia devices. Alternative open software projects exist, such as Intel oneAPI [63] and AMD ROCm [64]. The first is a multi-architecture developer tool for accelerated computing across CPUs, GPUs, and FPGAs. AMD ROCm is optimized for demanding deep learning models, such as generative AI, and HPC applications on AMD GPUs. Despite the momentum gained by these alternatives, CUDA remains the benchmark in the field due to its long-term, consolidated, and extensive system of mathematical libraries and models. OpenCV [65], on the other hand, is the reference library for computer vision, offering more than 2500 algorithms through an open-source paradigm. In agriculture, it can be exploited for specific, important tasks involving image processing techniques, such as the Hough transform for detecting crop rows [66,67]. Also, derivatives of OpenCV dedicated to plants have been introduced, like PlantCV [68], and are commonly used for data acquisition and analysis (see, for instance, [30]).
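To make the crop-row example concrete, the sketch below outlines one common OpenCV pipeline: a vegetation mask based on the excess-green index, edge detection, and a probabilistic Hough transform to extract candidate row segments. The input file name and all thresholds are placeholders that would need tuning for a real field; this is an illustrative baseline, not the specific method of the works cited above.

```cpp
// Sketch: candidate crop-row detection with OpenCV.
// Pipeline: excess-green vegetation mask -> Canny edges -> probabilistic Hough transform.
// File name and thresholds are placeholders for illustration only.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat bgr = cv::imread("field_view.jpg");                 // placeholder input image
    if (bgr.empty()) return 1;

    // Excess-green index ExG = 2G - R - B, a common vegetation/soil separator.
    cv::Mat bgr32;
    bgr.convertTo(bgr32, CV_32F);
    std::vector<cv::Mat> ch;
    cv::split(bgr32, ch);                                       // ch[0]=B, ch[1]=G, ch[2]=R
    cv::Mat exg = 2.0 * ch[1] - ch[2] - ch[0];

    // Normalize, binarize (Otsu), and extract edges.
    cv::Mat exg8, mask, edges;
    cv::normalize(exg, exg, 0, 255, cv::NORM_MINMAX);
    exg.convertTo(exg8, CV_8U);
    cv::threshold(exg8, mask, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    cv::Canny(mask, edges, 50, 150);

    // Probabilistic Hough transform: line segments that may correspond to crop rows.
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180.0, 80, 100, 20);

    for (const auto &l : lines)
        cv::line(bgr, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]), cv::Scalar(0, 0, 255), 2);
    cv::imwrite("rows_detected.jpg", bgr);
    return 0;
}
```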
Several of these libraries/environments do not directly support the C language or have deprecated it. This raises the question of whether it is worth transitioning to C++, which can be time-consuming and require significant effort while yielding only marginal improvements in throughput. A practical alternative is the so-called (informally) C+, which is not an existing programming language, but rather an adaptation of C syntax (wherever possible) to C++ headers. While Python is very convenient for prototyping, it is not the best choice for fully exploiting next-generation server CPUs, which can feature up to 128 cores and require a large amount of memory. For such tasks, multi-thread programming in C/C++ with OpenMP [69] is a very effective solution in terms of both performance and simplicity (parallelizing a structured loop with OpenMP simply requires introducing a single additional instruction). This is also an appealing possibility for edge devices: Arm CPUs, such as those onboard the Nvidia SoMs/SoCs listed in Table 4, are multicore architectures that can greatly benefit from the parallel implementation of algorithms. This is especially relevant given that the CPU performance of SoM/SoCs is often less consistent than that of their GPU counterparts.
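The OpenMP claim is easy to illustrate. In the generic sketch below (a toy per-pixel vegetation-index loop, not code from any cited system), a single pragma is all that is needed to spread the loop over the available CPU cores; it can be compiled with, e.g., g++ -fopenmp.

```cpp
// Sketch: parallelizing a structured loop with OpenMP.
// The per-element operation (a toy excess-green index) is purely illustrative.
#include <vector>
#include <omp.h>

void excess_green(const std::vector<float>& r, const std::vector<float>& g,
                  const std::vector<float>& b, std::vector<float>& exg)
{
    const long n = static_cast<long>(exg.size());
    #pragma omp parallel for            // the single additional instruction
    for (long i = 0; i < n; ++i)
        exg[i] = 2.0f * g[i] - r[i] - b[i];
}

int main() {
    const long n = 1000000;
    std::vector<float> r(n, 0.2f), g(n, 0.6f), b(n, 0.1f), exg(n);
    excess_green(r, g, b, exg);
    return exg.back() > 0.0f ? 0 : 1;   // trivial check so the work is not optimized away
}
```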

7. On-Board Hardware

Crop monitoring using UGVs requires advanced techniques for perception, localization, and movement planning. These processes often utilize deep learning algorithms that demand significant computational power, typically reserved for high-powered computers, which are not always suitable for mobile robotic platforms. Therefore, for effective implementation of these algorithms, UGVs must be equipped with state-of-the-art integrated computers. These devices should not only provide the required computational power but also be lightweight, energy-efficient, and compatible with the space and weight constraints typical of agricultural robotic platforms. UGVs used for agriculture operate outdoors in challenging environments, such as uneven terrain with soft and slippery soil, dust, vibrations, and often harsh weather conditions. Rugged hardware is essential to protect sensitive components and ensure operational continuity.
The CPU is the all-purpose computational device of the UGV, hosting the ROS node, managing navigation tasks, handling the I/O workflow from sensors and RTK localization within the crop, and coordinating the general activities of the machine. The traditional CPU family is the x64 architecture, based on a 64-bit instruction set introduced by AMD and later adopted by Intel. Until recently, this was the de facto standard for consumer and professional CPUs, providing high computational power and a complete ecosystem of system and mathematical software, including ROS, TensorFlow, PyTorch, optimized C/C++ compilers, Python implementations, OpenCV, and all the other tools necessary for advanced autonomous robots [58,74,75]. Typically, these CPUs are paired with consumer Nvidia GPUs equipped with CUDA to handle computationally intensive tasks related to deep learning or general linear algebra algorithms (GPGPU, general-purpose computing on GPUs). Recent Intel 14th-gen and AMD Ryzen 7000 Series consumer CPUs are undoubtedly powerful and suitable for the tasks mentioned above. However, they have a significant drawback: their energy consumption. With a sustained power dissipation (thermal design power, TDP) of up to 250 W, their prolonged use in the field may be seriously limited. Even in other application domains, this energy consumption is a well-known drawback of x64 devices.
As a result, an alternative architecture from the smartphone world has recently made an assertive appearance in the PC market: Arm devices, based on a RISC instruction set (whereas x64 is based on CISC). Although Arm processors have also been used for high-performance computing (the Japanese Fugaku installation was the world’s fastest computer on the TOP500 list from 2020 to 2022), typical Arm implementations are preferred for mobile applications, where energy autonomy and lightness are priorities, given that even the most energy-consuming devices in this family rarely exceed 50 W. This makes them good candidates for UGVs in agriculture, as they support monitoring and basic navigation tasks well [76,77,78]. While Arm CPUs tick many boxes for farming UGVs, a significant question remains: are they powerful enough to manage the new generation of autonomous rovers for agriculture, with improved features and more demanding challenges to face? The use of artificial intelligence to analyze the data collected by UGVs in real time enables the early diagnosis of issues such as infestations or nutritional deficiencies, but it raises the need for hardware accelerators for AI inference, a task that cannot be performed by the CPU within the expected time. Hardware acceleration facilitates on-site AI inference, reducing latency and improving system responsiveness. It is provided by (i) the above-mentioned GPUs equipped with CUDA; (ii) NPUs, i.e., ASICs (application-specific integrated circuits) exclusively designed for deep learning inference, hosted either on systems on a chip (SoCs) or systems on a module (SoMs); and (iii) hardware accelerators for AI inference that are similar to NPUs but attachable to other devices via fast USB connections, such as the Coral TPU and the Intel Neural Compute Sticks [77]. A statistical analysis of the prevalence of computational devices in UGVs for monitoring (excluding CPUs), based on the past decade’s publications, is illustrated in Figure 4 (right). These data were obtained using Google Scholar with the query template ‘UGV AND agriculture AND [device type]’, where the device type was alternately GPU, FPGA, ASIC, or VPU. While the use of ASICs and VPUs as standalone tools appears negligible, it should be noted that these devices can be integrated into several optical cameras (some listed in Table 1) to perform inference for simplified image classifiers.
A comparison of the AI accelerators is provided in Table 4. Interestingly, new generations of consumer x64 CPUs have been announced by AMD and Intel, with releases expected by the end of 2024. These processors integrate a CPU, a GPU, and an NPU on a single device. While their AI performance may not match that of the most powerful Nvidia devices, they offer an intriguing compromise among the various components (with CPUs significantly more powerful than current Arm offerings) within a framework of limited power consumption at an appealing price.
The Nvidia GPUs [50,78,79] come in either desktop or edge versions. The former are mainly represented by the current GeForce RTX 40 series powered by the Ada Lovelace architecture, while the latter belong to the Jetson series [55,59,75,79,80,81]. More precisely, the Jetson series consists of systems on a chip (SoCs) with their own CPU, GPU, and, in the most powerful models, a hardware accelerator for AI inference called the NVDLA. The Google Coral TPU [76,78] brings accelerated machine learning inference to existing systems via a USB port, offering an impressive energy-consumption-to-performance ratio (0.5 W/TOPS, where TOPS stands for tera operations per second). Despite its release in November 2019, the Coral TPU can still perform image classification at resolutions of up to 513 × 513 pixels in 52 ms [82], enabling real-time analysis at approximately 19 frames per second (fps). This performance is more than sufficient, as a UGV moving at 3 m/s with a per-image crop coverage depth of 30 cm requires only 10 fps to capture all the images necessary for monitoring the plants. However, these figures need in-field verification, especially when tested on more recent deep learning classifiers. The Nvidia Jetson series varies significantly in performance, with AI capabilities differing by more than 500-fold between models. The most powerful, the Orin AGX, delivers TOPS figures comparable to those of entry-level GeForce cards but with a much lower TDP. At 0.22 W/TOPS, the Jetson Orin is the undisputed leader in terms of energy-consumption-to-performance ratio. Its AI inference capabilities, combined with its large memory capacity, enable surprising applications, such as running large language models (LLMs) like Llama 2 with 70 billion parameters in real time [83], theoretically allowing the UGV to engage in human conversation beyond standard image classification tasks. Currently, the Jetson series is likely the best candidate for UGV implementation in agriculture, although the need for rugged versions can double the cost. When faced with strict budget or energy constraints, a low-end model like the Nvidia Jetson Orin Nano, listed in the table, can be selected. Unlike the Nvidia GeForce cards and the Coral TPU, the Jetson SoCs include a CPU that, even in the high-end Orin model, is a modest Arm Cortex-A78AE, launched years ago for the automotive industry with limited computational power. This results, in our view, in a somewhat unbalanced SoC, featuring a state-of-the-art AI accelerator paired with a CPU that may struggle to meet the demands of modern, high-performance UGVs.
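To make the frame-rate argument above reproducible, the short sketch below computes the acquisition rate required for a given ground speed and per-image coverage and compares it with the throughput implied by a device’s inference latency. The numbers are taken from the text (3 m/s, 30 cm, 52 ms) and are illustrative, not benchmarks; the overlap parameter is an additional assumption for cases where consecutive frames must overlap.

```python
def required_fps(speed_mps, footprint_m, overlap=0.0):
    """Frames per second needed so consecutive images tile the crop row.

    speed_mps   : UGV ground speed in m/s
    footprint_m : along-track ground coverage of one image in m
    overlap     : desired fractional overlap between consecutive frames
    """
    effective_step = footprint_m * (1.0 - overlap)
    return speed_mps / effective_step

# Example from the text: 3 m/s and 0.30 m coverage -> 10 fps required.
fps_needed = required_fps(3.0, 0.30)
latency_ms = 52.0                    # published Coral TPU classification latency
fps_device = 1000.0 / latency_ms     # ~19 fps sustained by the accelerator
print(f"required: {fps_needed:.1f} fps, device sustains: {fps_device:.1f} fps")
```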
We conclude this section by noting several exciting innovations on the horizon from consumer hardware manufacturers. New Arm CPUs for Windows-based (and, more interestingly for our purposes, Linux-capable) PCs are being introduced, such as the Qualcomm Snapdragon X Elite, a multi-core chip capable of delivering 45 TOPS for AI inference with a TDP of no more than 50 W. Additionally, the traditional x64 CPU manufacturers, Intel and AMD, are releasing new-generation devices aimed at matching the performance-to-TDP ratio of Arm chips. The upcoming AMD Ryzen AI 9 and Intel Core Ultra (2nd generation) CPUs will feature at least 40 TOPS for AI inference (the minimum requirement for hosting Microsoft’s new AI-based assistant, Copilot) with reduced TDPs (less than 40 W for the high-range, newly announced Intel model). In summary, while the Nvidia Jetson is currently the most suitable choice, with the Coral TPU being a practical option for budget solutions with moderate performance requirements, the choice of hardware and technology for UGVs used in agriculture should also consider the new consumer CPUs expected to become available in the last quarter of 2024.

8. Navigation

UGV navigation can be defined as the task of mapping the surrounding environment, localizing the robot’s position in space, planning a path, executing the movement, and avoiding dynamic obstacles [84]. Navigation capabilities are crucial for any UGV, regardless of its purpose in field operations [85]. This combination of activities is critical and computationally expensive in an agricultural context, which requires navigating effectively through heterogeneous environments that can vary significantly due to natural and artificial obstacles, terrain features, and changing weather conditions. Considering each aspect separately, it is possible to define the constraints, challenges, and available solutions.

8.1. Mapping

The rovers are equipped with a variety of sensors, including LiDAR, radar, and cameras, to create detailed maps of the agricultural environment [86]. These maps not only assist in navigation [87,88] but are also crucial for monitoring the health of crops, assessing growth, and identifying areas that require special attention. The mapping process can greatly benefit from the use of advanced imaging technologies and from the analysis of collected data to improve agricultural practices. Navigation is a complex task that requires the rover to build a body of knowledge, most of it acquired in real time and constantly updated. A common technique is Simultaneous Localization And Mapping (SLAM), which enables an autonomous robot to incrementally construct a comprehensive map of an unknown environment while continuously gathering reliable observations, which are used to estimate its own motion (egomotion) and current location. This process allows the robot to explore the environment automatically, building and refining the map while simultaneously detecting obstacles and estimating its position within it. As reported by [89], the most common SLAM techniques are visual SLAM, LiDAR SLAM [90], and multi-sensor fusion SLAM [91], each depending on a different type of input sensor used for rover navigation.
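For illustration, the following Python sketch shows the core of a LiDAR-style mapping step as used inside many SLAM pipelines: a log-odds occupancy grid updated from a single range beam, assuming the robot pose is already known. It is a deliberately simplified fragment (no pose estimation, no scan matching), and the grid resolution and sensor-model increments are arbitrary assumptions.

```python
import numpy as np

# Log-odds occupancy grid: 0 = unknown, >0 = likely occupied, <0 = likely free.
GRID_RES = 0.05                      # metres per cell (assumed)
grid = np.zeros((400, 400))          # a 20 m x 20 m map
L_OCC, L_FREE = 0.85, -0.4           # log-odds increments (assumed sensor model)

def update_from_beam(grid, robot_xy, bearing, rng, max_range=10.0):
    """Mark cells along one range beam as free and the end cell as occupied."""
    x0, y0 = robot_xy
    n_steps = int(rng / GRID_RES)
    for k in range(n_steps):
        d = k * GRID_RES
        cx = int((x0 + d * np.cos(bearing)) / GRID_RES)
        cy = int((y0 + d * np.sin(bearing)) / GRID_RES)
        if 0 <= cx < grid.shape[0] and 0 <= cy < grid.shape[1]:
            grid[cx, cy] += L_FREE          # traversed cells are likely free
    if rng < max_range:                     # a hit, not a max-range miss
        cx = int((x0 + rng * np.cos(bearing)) / GRID_RES)
        cy = int((y0 + rng * np.sin(bearing)) / GRID_RES)
        if 0 <= cx < grid.shape[0] and 0 <= cy < grid.shape[1]:
            grid[cx, cy] += L_OCC           # beam endpoint is likely an obstacle

# Example: robot at (10 m, 10 m) sees an obstacle 3.2 m away at a 30-degree bearing.
update_from_beam(grid, (10.0, 10.0), np.deg2rad(30.0), 3.2)
```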

8.2. Localization

The ability to determine the rover’s position and orientation within the environment is fundamental for efficient navigation. Modern localization techniques such as GNSS (see Section 5.5) coupled with an IMU are commonly used to ensure accurate localization. This is particularly important in precision agriculture, where the exact position of the rover directly impacts the effectiveness of operations such as seeding, fertilization, and irrigation. Localization is crucial when a UGV operates in the same environment as humans, where high levels of safety and efficiency are mandatory [92]. However, navigation in cluttered environments is challenging, particularly in GNSS-denied areas such as under a canopy. In such scenarios, vision-based algorithms that use depth cameras or LiDAR sensors are preferred [93].

8.3. Path Planning

The rovers use advanced algorithms to move autonomously and efficiently in the fields while avoiding obstacles and ensuring comprehensive coverage. These algorithms must consider various factors, such as terrain topography, the presence of obstacles, the need to optimize routes to maximize the efficiency of agricultural operations [94,95,96], and path traversability in relation to UGV characteristics [97]. Path planning can be performed at the global or local level, depending on the available environment information, and there are different types of environment modeling and different path evaluation methods. As reported by [98], it is very common to execute global planning to identify the main targets and then local planning to obtain better smoothness and trajectory definition. Path planning is often supported by artificial intelligence, allowing it to adapt dynamically to changing conditions. As reported by [99], the most popular path planning algorithms are as follows:
  • A* Algorithm, a well-known pathfinding algorithm that searches for the shortest path from a start point to an endpoint by considering both the cost to reach a point and the estimated cost to the destination. It is widely used in agricultural UGVs for its efficiency and accuracy in path planning [100,101] (a minimal grid-based sketch is given after this list).
  • Dijkstra’s Algorithm, which calculates the shortest paths from a single source node to all other nodes in a weighted graph. Its application in UGVs involves determining the optimal routes for tasks such as soil sampling and weed management.
  • Rapidly exploring Random Trees (RRT) algorithm, designed for exploring high-dimensional and cluttered environments, is often used to find feasible paths in real-time scenarios [102]. This algorithm incrementally builds a tree of random samples from the UGV’s starting position towards the goal, avoiding obstacles while randomly exploring the space. Its speed and low computational cost mean that it is adopted in many real-world scenarios. Several variations have been developed and adopted, such as RRT*, which ensures asymptotic convergence to the optimal path, a feature not guaranteed by the original algorithm [103].
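To make the first of these algorithms concrete, the sketch below (referenced in the A* item above) implements A* on a small occupancy grid with 4-connected moves and a Manhattan-distance heuristic. The grid layout, start, and goal are arbitrary illustrative values; a field deployment would plan over a traversability map and cost function rather than a toy binary grid.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2D occupancy grid (0 = free, 1 = obstacle), 4-connected moves."""
    def h(cell):                              # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    came_from, g_best = {}, {start: 0}
    while open_set:
        f, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue                          # already expanded with a better path
        came_from[cell] = parent
        if cell == goal:                      # reconstruct path by walking parents
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get((nr, nc), float("inf")):
                    g_best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None                               # no feasible path

field = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 0, 1, 0],
         [1, 0, 0, 0]]
print(astar(field, (0, 0), (3, 3)))
```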
Even though there are many works related to navigation and path planning, most of them assume static environments without moving objects (obstacles and targets) [101]; future studies will focus on dynamic environments where swarms of heterogeneous robots and humans operate in the same space. In summary, the combination of these technologies allows agricultural rovers to operate autonomously, improving the efficiency and effectiveness of agricultural operations. With the advancement of sensor technology and AI, navigation, localization, and mapping are expected to become increasingly accurate and reliable, opening up new possibilities for modern agriculture.

9. Algorithms, AI Models and Digital Twins

In this section, we analyze recent works focusing on the use of artificial intelligence for the autonomous management of UGVs, algorithms for feature extraction, and the Digital Twin paradigm.

9.1. Algorithms in Agricultural UGVs

Algorithms are crucial for enhancing the operational efficiency and precision of UGVs. This includes navigation, target detection and recognition, sensor data noise reduction, and actuator control (though this work does not focus on the latter). UGVs equipped with cameras and sensors use image processing algorithms to analyze visual data and make informed decisions:
  • Edge Detection algorithms, such as Canny and Sobel, are used to identify the boundaries of objects within images. This is crucial for tasks like crop row detection [96], the diagnosis of plant health issues, and discrimination between crops, weeds, and the soil background.
  • Image segmentation algorithms assign each pixel to a class in order to partition the image content at a fine-grained level and simplify analysis. Techniques like k-means clustering and watershed segmentation help to isolate plants from the background for precise monitoring.
  • Feature Extraction techniques identify specific characteristics within an image, such as texture, shape, and color. These features are then used to diagnose plant health, monitor growth, detect pest infestations, or recognize weeds. The most common are the Scale-Invariant Feature Transform (SIFT) algorithm [104], the Hough transform, and the Otsu method [105]. When multispectral images are acquired, the Normalized Difference Vegetation Index (NDVI) and its variations can be used to estimate plant growth and the vegetation phase by combining the near-infrared and red channels acquired by the sensors [106] (a minimal computation is sketched below). Other authors, as in [107], use data generated by LiDAR sensors to compute the Leaf Area Index (LAI) of detected plants. Environment features are detected by methods such as FAST, ORB, VINS, and BASALT in SLAM navigation to estimate orientation and movement [108].
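As a minimal example of the NDVI computation mentioned in the last item above, the following sketch derives a per-pixel index map and a simple vegetation fraction from co-registered near-infrared and red bands. The toy band arrays and the 0.4 vegetation threshold are illustrative assumptions; real multispectral sensors require radiometric calibration before the index is meaningful.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Per-pixel Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)   # eps avoids division by zero

# Toy 2x2 reflectance bands standing in for co-registered multispectral channels.
nir_band = np.array([[0.60, 0.55], [0.20, 0.65]])
red_band = np.array([[0.10, 0.12], [0.18, 0.08]])

index_map = ndvi(nir_band, red_band)
vegetation_fraction = float((index_map > 0.4).mean())   # 0.4: illustrative threshold
print(index_map.round(2), vegetation_fraction)
```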
In order to accomplish their complex tasks in the most accurate and efficient way, agricultural UGVs embed multiple sensors and combine the generated data to enhance the accuracy and reliability of the information collected. As reported by [109], using different types of sensors makes it possible to improve accuracy and overcome the limitations of each individual sensor type. However, due to the harsh environmental conditions, the data provided by sensors are commonly affected by noise, so they must be cleaned before being used; algorithms like the Kalman filter (and its variations) estimate unknown variables by processing a series of measurements over time, even when the measurements contain noise and inaccuracies. Kalman filtering is also used on UGVs to fuse data from GNSS, IMU, and other sensors [110,111] (a minimal one-dimensional sketch is given after the following list). Control algorithms govern the movements and actions of UGVs based on sensor inputs and desired outcomes. The main types are as follows:
  • Proportional–Integral–Derivative (PID) controllers, which are widely used in UGVs for maintaining the desired speed, direction, and stability. They adjust the control inputs to minimize the error between the desired and actual states [112].
  • Fuzzy logic controllers, which handle uncertainties and imprecisions in UGV operations by using fuzzy set theory. They are applied in scenarios where traditional control methods may struggle, such as variable terrain and crop conditions, or for collision avoidance [113].
  • Adaptive Fuzzy PID (AFPID) controllers: these systems modify their parameters in real time to cope with changing environmental conditions and uncertainties [114]. Such adaptability is crucial for UGVs operating in dynamic, changing agricultural environments.
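Returning to the sensor-fusion step referenced before the list above, the following Python sketch implements a one-dimensional Kalman filter in which the IMU-derived velocity drives the prediction step and noisy GNSS position fixes drive the update step. The time step, noise variances, and simulated measurements are illustrative assumptions; real RTK/IMU fusion is multi-dimensional and typically relies on extended or error-state formulations.

```python
import numpy as np

# One-dimensional Kalman filter: the state is the along-track position.
# IMU velocity acts as a control input; GNSS provides noisy position fixes.
dt = 0.1          # time step [s] (assumed)
q = 0.02          # process noise variance per step (assumed)
r = 0.25          # GNSS measurement noise variance, ~0.5 m std (assumed)

x, p = 0.0, 1.0   # initial position estimate and its variance

def kf_step(x, p, imu_velocity, gnss_position):
    # Predict: dead-reckon with the IMU velocity and inflate the uncertainty.
    x_pred = x + imu_velocity * dt
    p_pred = p + q
    # Update: blend in the GNSS fix according to the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (gnss_position - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

rng = np.random.default_rng(0)
for step in range(1, 51):                        # simulated straight run at 1 m/s
    true_pos = step * dt * 1.0
    imu_vel = 1.0 + rng.normal(0.0, 0.05)        # slightly noisy velocity
    gnss_fix = true_pos + rng.normal(0.0, 0.5)   # noisy position fix
    x, p = kf_step(x, p, imu_vel, gnss_fix)

print(f"true: {true_pos:.2f} m, fused estimate: {x:.2f} m (variance {p:.3f})")
```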
Algorithms are the backbone of agricultural UGVs, enabling them to perform complex tasks, from path planning and image processing to sensor fusion and control. Continued advancements in algorithm development will further enhance the effectiveness and reliability of UGVs, contributing to more sustainable and productive farming practices [115].

9.2. AI in Agricultural UGVs

AI is a long-established field of Computer Science that focuses on creating systems capable of performing tasks that typically require human intelligence [116,117]. These tasks include learning, reasoning, problem-solving, perception, and language understanding. At present, AI is involved in a wide range of sectors, transforming industries by automating processes, improving efficiency, and enabling new capabilities. In agriculture, the use of AI technologies, especially in the form of Machine Learning (ML), is becoming widespread and encompasses all operative phases: pre-, mid- and post-production [118]. The possibility of automating entire production processes enhances farm efficiency through the use of self-guided vehicles and the autonomous execution of farm duties such as detecting pests, predicting yield, spraying, pruning, mowing, seeding, thinning, and harvesting [105,119,120]. ML plays a crucial role in both the planning and execution phases of UGVs’ activities. Data gathered by fully autonomous robots can be utilized by Decision Support Systems (DSS) to enhance production and use resources sustainably [121,122]. ML solutions can be divided into Traditional Machine Learning (TML) techniques and Deep Learning (DL) models [123]. As reported in [124], classical image processing and TML techniques, such as SVM [125], clustering [104], decision trees [126], random forest [127], AdaBoost [128], and RANSAC [129], to name just a few, have been extensively applied in various agricultural applications. However, these methods come with significant limitations. One major challenge is the need to meticulously select suitable algorithms for tasks such as feature extraction, shape detection, and classification. Additionally, these techniques often require partial control of the environment, which can include the use of artificial backgrounds or lighting to ensure accurate and consistent results. In contrast, DL models, a subset of ML, analyze vast amounts of agricultural data collected over time, offer great flexibility, and can make decisions based solely on visual information. For example, Visual Simultaneous Localization And Mapping (VSLAM) algorithms utilize data from one or more cameras to detect obstacles and determine the rover’s position in space, in contrast with classical approaches that require the fusion of multiple sensor data and many algorithms for their integration [84,130]. Input images can also be used to compute the movement of a robot (visual odometry) without external, dedicated sensors, thanks to recent DL research [131]. Depending on the artificial neural network architecture, it is possible to achieve different performances (accuracy and speed) in different tasks (detection, recognition, segmentation, prediction, data fusion) and to conduct a non-destructive detection and analysis of plants [132]. The most-used DL models/architectures are as follows:
  • Convolutional Neural Networks (CNNs) are widely used in UGVs for classification, detection, and semantic segmentation [133,134,135,136,137,138,139,140,141]. These models can analyze images of crops to identify specific plant health issues, such as nutrient deficiencies or pest infestations, and can also be used for UGV navigation, detecting obstacles and recognizing viable pathways, such as tree rows (a minimal inference sketch is given after this list). The most relevant DL models can be grouped into three categories: (a) Simple classification models, which detect the presence of a specific object in an image without localizing it, by identifying patterns in the extracted features. Examples include AlexNet, VGGNet, MobileNet, and ResNet, which are often used as encoders in more complex deep learning networks. (b) Single-stage methods, based on regression, which directly predict object locations and classes in a single pass through the network; YOLO (You Only Look Once) and SSD (Single Shot MultiBox Detector) are the best-known examples. (c) Two-stage methods, which first generate candidate regions and then classify them. These methods are generally more accurate but more computationally demanding; two-stage models include R-CNN and its improved versions, such as Faster R-CNN and Mask R-CNN. Traditionally, due to the limited computational power of UGVs and the need for fast detection, single-stage models were predominantly used despite their lower accuracy. Today, advances in hardware performance (see Section 7) have made some two-stage methods feasible as well.
  • Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks are used for analyzing sequential data, such as predicting weather conditions or tracking crop growth over time [142,143]. These deep learning models excel at preserving temporal relationships within data sequences, effectively modeling patterns and dependencies that evolve over time. This ability stems from their recurrent architecture, in which the hidden-state outputs of one time step are fed back as inputs at the next, allowing information to persist. This enables the network to maintain a hidden state that captures information from previous time steps.
  • Reinforcement Learning (RL) models optimize the decision-making processes of UGVs [54,144]. In this approach, UGVs learn to make decisions by receiving feedback from their actions in the form of rewards or penalties, and model training can be achieved in unseen environments. These models are used for path-planning and rover exploration in unstructured environments, where autonomous navigation, efficient resource management, and adaptation to dynamic field conditions are essential. Additionally, RL models are employed to coordinate multiple unmanned vehicles without explicitly defining and coding their behavior [145,146]. By continually learning from their interactions with the environment, these models help UGVs improve their performance over time, leading to more effective and efficient agricultural operations.
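As a minimal illustration of the CNN-based classification workflow in the first item above, the following sketch runs a pretrained MobileNetV3 classifier from torchvision on a single image. In practice, the network would be fine-tuned on crop-specific data and exported for the edge devices discussed in Section 7; the image file name and the use of generic ImageNet weights are illustrative assumptions, not a crop-monitoring model.

```python
import torch
from PIL import Image
from torchvision.models import mobilenet_v3_small, MobileNet_V3_Small_Weights

# Pretrained ImageNet weights stand in for a crop-specific, fine-tuned model.
weights = MobileNet_V3_Small_Weights.DEFAULT
model = mobilenet_v3_small(weights=weights).eval()
preprocess = weights.transforms()          # resizing, cropping, normalization

image = Image.open("leaf_patch.jpg").convert("RGB")   # hypothetical field image
batch = preprocess(image).unsqueeze(0)                # shape: (1, 3, H, W)

with torch.no_grad():                      # inference only, no gradients needed
    probs = model(batch).softmax(dim=1)

top_prob, top_idx = probs[0].max(dim=0)
label = weights.meta["categories"][int(top_idx)]
print(f"predicted class: {label} (p = {top_prob.item():.2f})")
```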
As reported in [147], DL models are accurate, fast, and robust solutions for detection and recognition tasks in agriculture and are widely used in UGVs instead of classic computer vision algorithms due to their ability to generalize and extract complex features from images with less interference from the background environment. However, DL accuracy depends on the dataset used to train the model. As reported by [119,147,148,149], good datasets are scarce and limited to the most common fruits/vegetables, and the operative context poses additional challenges (e.g., occlusions, variations in illumination, small fruits). Notably, there are few open agricultural datasets, and many more are certainly needed. An informative review on this topic is provided in [150], although it only covers work up to 2020. For a more recent list of open datasets, we refer to [8].
For this reason, data enhancement and augmentation operations are adopted, including techniques such as random cropping, adding noise, image scaling, and random contrast adjustment [136,137,141,151]; transfer learning is used to adapt previously trained networks to new fruits and vegetables and to fine-tune the model [152,153]; or synthetic datasets are created [84]. With this diverse array of models, agricultural UGVs can tackle various tasks effectively: ML models handle robust data analysis and predictions, DL models excel in advanced image processing, and RL models optimize decision-making. Integrating these models allows UGVs to accomplish a broad spectrum of tasks with precision and adaptability, helping to enhance crop health, boost yields, and promote sustainable farming practices. Although this approach offers significant advantages, it also comes with limitations. One notable limitation is the complexity of integrating and coordinating different AI models within a single UGV system, which can lead to increased development and maintenance costs. Additionally, the need for extensive computational resources (GPUs) to run multiple AI models simultaneously can pose challenges in terms of energy consumption and processing speed, particularly in remote or resource-constrained agricultural environments. Lastly, the effectiveness of these AI models may be limited by the quality and quantity of the available training data, as well as by their ability to generalize to diverse and dynamic real-world conditions in agriculture.
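As a small illustration of the augmentation techniques mentioned at the beginning of the previous paragraph, the sketch below composes random cropping, rescaling, contrast adjustment, and additive Gaussian noise using torchvision transforms. The crop size, jitter ranges, noise level, and the dataset path in the usage comment are illustrative assumptions to be tuned per dataset.

```python
import torch
from torchvision import transforms

class AddGaussianNoise:
    """Small custom transform: additive Gaussian noise on a tensor image in [0, 1]."""
    def __init__(self, std=0.02):
        self.std = std
    def __call__(self, img):
        return (img + torch.randn_like(img) * self.std).clamp(0.0, 1.0)

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),   # random crop + rescaling
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.3),  # random contrast/brightness
    transforms.ToTensor(),
    AddGaussianNoise(std=0.02),
])

# Usage (assuming a folder-per-class dataset of crop images):
# from torchvision.datasets import ImageFolder
# train_set = ImageFolder("dataset/train", transform=augment)
```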

9.3. Digital Twins in Agricultural UGVs

Different definitions of Digital Twins (DTs) exist in the literature [154,155,156]. To date, five constituent elements of a DT have been identified: the real entity, the virtual entity, communication, data, and the service layer. The most generic and widely adopted DT definition was reported by [154]: “A Digital Twin is a dynamic and self-evolving digital/virtual model or simulation of a real-life subject/object representing the exact state of its physical twin at any given point of time via exchanging the real-time data as well as keeping the historical data”. DTs have been used in various environments, ranging from manufacturing to agriculture, with notable advantages in speeding up prototyping and product improvement, reducing costs and waste in testing, preventing/predicting malfunctions, and enabling continuous monitoring [157]. In agriculture, DTs have been used with positive results to model individual plants or entire fields of the same vegetable in order to predict the optimal harvesting period, to estimate potential yield [158], to manage nutrients in the soil, or to schedule irrigation and prevent pathogens [159,160,161,162,163,164]. The use of UGVs for continuous crop monitoring fits well in this context, as they can greatly contribute to the collection of high-quality data, such as 3D point clouds of plants, which are used to create dynamic and accurate 3D models of the crops over time.
On the other hand, there are very few examples of the use of DTs for agricultural robots, as reported by [118,155]. This is due to several intrinsic constraints: (1) limited computational resources and power management conflict with the need for real-time, complex DT simulations; (2) high-speed, reliable internet connections are often unavailable in rural areas, making it impractical to exchange data with a remote/cloud platform; (3) scalability issues arise because each farm has unique characteristics (soil, water, weather, crop varieties) requiring specific modeling; and (4) the agricultural environment is highly dynamic and unstructured, making it difficult to accurately reflect real-world conditions in the virtual model and to keep the two constantly synchronized. Due to these issues, DTs are not widely used in agricultural UGVs, because the virtual model and/or the continuous information exchange is often absent. However, (i) the recent availability of novel edge devices combining good computational power with low energy consumption and (ii) the shift from a purely cloud-based approach to a predominantly edge-based paradigm can significantly mitigate these problems. In contrast, what is commonly created is a virtual representation of the rover and its operating environment, used to simulate tasks in a deterministic and safe manner for design and testing purposes.
Various robot simulation platforms, such as Gazebo, Webots, and the latest Nvidia Isaac Sim, are available for prototyping and validating robot designs and algorithms (see Section 6.2) without the need for real-life execution. These simulation platforms accurately replicate both the functional and non-functional characteristics of the robot, allowing for observations of its behavior and reactions as if it were interacting in the real world. Robot simulations are already used in industry to design and test physical structures [165] and battery management, but it is also possible to train the DL algorithms used by UGVs for navigation, detection, and recognition tasks. Once a satisfactory performance is achieved, the model can be transferred to the physical rover for real-world use. This approach reduces development time, minimizes the need for field experiments and sensor calibration, and allows for repeatable tests without impacting the real field, thus avoiding potential damage or the disruption of operations [166]. An intermediate approach, called Hardware-In-the-Loop (HIL), is supported by some simulation frameworks. In this approach, certain components in the virtual environment are connected to real-world hardware objects, such as camera sensors or Electronic Control Units (ECUs) for cars. The responses from these real hardware components are used as if they were produced by their virtual counterparts, contributing to a more realistic simulation and system test. This solution allows for the testing of critical components of a real system in an accurate, non-destructive, and repeatable way. The great impact and recent diffusion of DTs have been made possible by the lowering cost of IoT-embedded platforms, the increased bandwidth for wireless communication, and the availability of computational power alongside advanced AI models. Thanks to the ever-increasing technological availability and the desire to automate and streamline as many aspects of agricultural production as possible, rovers and drones have found widespread use in both the monitoring and field intervention phases. In order to use these tools successfully, it is necessary to go through the design, prototyping, and validation phases of the robotic solutions. Despite their advantages, the implementation of DTs for agricultural UGVs faces several challenges related to (i) data management and processing with limited UGV resources, (ii) the integration of existing agricultural systems and technologies with the virtual platforms, and (iii) the specialized knowledge required for developing and maintaining these advanced technologies.
DTs represent a cutting-edge technology, and their usage in agriculture offers unprecedented simulation, real-time monitoring, and prediction capabilities. Even though most studies on robot navigation and path planning were initially conducted in simulated environments [95], there are not many research works explicitly addressing the use of agricultural UGVs for plant monitoring, due to the previously reported limitations. As reported by [118], the use of DTs in agriculture is still in its infancy; therefore, many challenges remain to be solved before the DT paradigm can be fully adopted.

10. Discussion and Conclusions

Continuous monitoring in agriculture, while impractical for humans due to labor constraints, has the potential to significantly advance everyday farming practices. This approach offers clear benefits for farm economies, environmental protection, and biosphere health, including human health. Thanks to cutting-edge developments in scientific hardware and AI-based data analysis, the continuous monitoring of crops by unmanned, fully autonomous robots is now attainable. However, this does not imply that finished, affordable, robust, and effective UGVs for autonomous monitoring are easily realizable. Instead, the current ICT technology, in principle, allows for the design and implementation of prototype systems with a reasonable cost, capable of sophisticated operations such as in-field navigation, the real-time detection of crop emergencies, precise georeferencing, alert report preparation, and high-quality data collection to maintain a digital twin of the crop over time.
Remarkably, researchers with minimal or no experience in mechatronics can now plan R&D activities in unmanned ground vehicles for agricultural monitoring. This is largely thanks to robotic technology providers who design and build custom robots capable of accommodating various hardware devices and sensors, which are ready to host AI models and digital twins of plants and crops. While these tasks are not trivial, they can largely be managed within an ICT context. This contrasts sharply with more critical applications, such as robots designed for field treatments like spraying, weeding, seeding, pruning, and picking fruits. These UGVs are significantly more complex in terms of mechatronics and their design necessitates detailed, crop-specific agronomic knowledge. On the other hand, recent advancements in ICT have enabled the development of affordable UGVs for monitoring, as discussed in Section 1, Section 5 and Section 7, which address the costs of complete UGV systems, sensors, and on-board hardware. Additionally, operational autonomy has significantly improved, driven by better batteries and new low-power hardware components (see Table 4).
For such rovers, the hardware market for AI inference is becoming quite saturated, with powerful edge devices already available and intriguing new proposals expected in the coming months. Sensors such as RGB-D cameras and 3D LiDARs have advanced significantly, offering high-performance products at affordable prices. This naturally suggests a shift from a pure cloud paradigm to an approach mostly based on edge computing. The use of deep learning technology for image classification has also seen substantial progress, resulting in models with improved performance that are primarily constrained by the quality of the training datasets rather than by the models themselves. Data remain a critical challenge, as is common in other deep learning applications. Additionally, the seasonality of plants and vegetables is a problem, as data collection is restricted to narrow time windows throughout the year, unlike in industrial applications. Furthermore, the variability in plant phenotypes and their interaction with the environment and climate can create significant year-to-year differences, which must be included in datasets to train high-quality, generalizable models.
There is ample opportunity for research and development in this area. The concept of the digital twin is particularly promising, as it allows for the continuous remote monitoring of a crop—or even individual plants in low- to mid-density fields—over time. However, the most challenging aspects are likely to involve autonomous navigation in unstructured environments, including irregular terrain and potentially harsh weather conditions. For both mapping and routing, as well as for creating the digital twin, the interaction between UGVs and UAVs appears very promising and definitely merits field experiments.

Funding

This research was funded by the Autonomous Region of Sardinia under project AI and by Ministry of Enterprises and Made in Italy under project SMAART.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The complete bibliographic dataset is available in BibTeX format upon reasonable request for non-commercial purposes.

Acknowledgments

We would like to thank the anonymous reviewers, especially reviewer 3, for their detailed feedback and valuable suggestions on how to improve the quality of the manuscript. Additionally, we thank Rocco Galati for the useful discussions on UGV design and for granting permission to use the image and diagram in Figure 2.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Alatise, M.B.; Hancke, G.P. A Review on Challenges of Autonomous Mobile Robot and Sensor Fusion Methods. IEEE Access 2020, 8, 39830–39846. [Google Scholar] [CrossRef]
  2. Fernandes, H.R.; Polania, E.C.M.; Garcia, A.P.; Mendonza, O.B.; Albiero, D. Agricultural unmanned ground vehicles: A review from the stability point of view. Rev. Cienc. Agron. 2020, 51, e20207761. [Google Scholar] [CrossRef]
  3. Gonzalez-De-Santos, P.; Fernández, R.; Sepúlveda, D.; Navas, E.; Armada, M. Unmanned Ground Vehicles for Smart Farms. In Agronomy; Amanullah, K., Ed.; IntechOpen: Rijeka, Croatia, 2020; Chapter 6. [Google Scholar] [CrossRef]
  4. Wang, Y.; Li, X.; Zhang, J.; Li, S.; Xu, Z.; Zhou, X. Review of wheeled mobile robot collision avoidance under unknown environment. Sci. Prog. 2021, 104, 003685042110377. [Google Scholar] [CrossRef] [PubMed]
  5. Mahmud, M.S.A.; Abidin, M.S.Z.; Emmanuel, A.A.; Hasan, H.S. Robotics and automation in agriculture: Present and future applications. Appl. Model. Simul. 2020, 4, 130–140. [Google Scholar]
  6. Chandra, R.; Collis, S. Digital agriculture for small-scale producers: Challenges and opportunities. Commun. ACM 2021, 64, 75–84. [Google Scholar] [CrossRef]
  7. Fasiolo, D.T.; Scalera, L.; Maset, E.; Gasparetto, A. Towards autonomous mapping in agriculture: A review of supportive technologies for ground robotics. Robot. Auton. Syst. 2023, 169, 104514. [Google Scholar] [CrossRef]
  8. Colucci, F.; Maggio, F.; Pintus, M. Recent Advances in AIoT for Image Classification and Continuous Monitoring in Agriculture. IoT, 2024; paper under submission. [Google Scholar]
  9. Yépez-Ponce, D.F.; Salcedo, J.V.; Rosero-Montalvo, P.D.; Sanchis, J. Mobile robotics in smart farming: Current trends and applications. Front. Artif. Intell. 2023, 6, 1213330. [Google Scholar] [CrossRef]
  10. Etezadi, H.; Eshkabilov, S. A Comprehensive Overview of Control Algorithms, Sensors, Actuators, and Communication Tools of Autonomous All-Terrain Vehicles in Agriculture. Agriculture 2024, 14, 163. [Google Scholar] [CrossRef]
  11. Bazargani, K.; Deemyad, T. Automation’s Impact on Agriculture: Opportunities, Challenges, and Economic Effects. Robotics 2024, 13, 33. [Google Scholar] [CrossRef]
  12. Scopus Document Search. Available online: https://www.scopus.com (accessed on 9 October 2024).
  13. Google Scholar. Available online: https://scholar.google.com/ (accessed on 9 October 2024).
  14. arXiv. Available online: https://arxiv.org (accessed on 9 October 2024).
  15. IEEE Xplore. Available online: https://ieeexplore.ieee.org/ (accessed on 9 October 2024).
  16. NIH National Library of Medicine. Available online: https://pubmed.ncbi.nlm.nih.gov (accessed on 9 October 2024).
  17. Science.gov. Available online: https://www.science.gov (accessed on 9 October 2024).
  18. ScienceDirect. Available online: https://www.sciencedirect.com (accessed on 9 October 2024).
  19. Semantic Scholar. Available online: https://www.semanticscholar.org (accessed on 9 October 2024).
  20. World Wide Science. Available online: https://worldwidescience.org (accessed on 30 September 2024).
  21. Botta, A.; Quaglia, G. Performance analysis of low-cost tracking system for mobile robots. Machines 2020, 8, 29. [Google Scholar] [CrossRef]
  22. Botta, A.; Cavallone, P.; Tagliavini, L.; Colucci, G.; Carbonari, L.; Quaglia, G. Modelling and simulation of articulated mobile robots. Int. J. Mech. Control 2021, 22, 15–25. [Google Scholar]
  23. Botta, A.; Cavallone, P.; Carbonari, L.; Tagliavini, L.; Quaglia, G. Modelling and Experimental Validation of Articulated Mobile Robots with Hybrid Locomotion System. Mech. Mach. Sci. 2021, 91, 758–767. [Google Scholar] [CrossRef]
  24. Robodyne. Available online: https://www.robo-dyne.com/ (accessed on 9 October 2024).
  25. Dhanush, G.; Khatri, N.; Kumar, S.; Shukla, P.K. A comprehensive review of machine vision systems and artificial intelligence algorithms for the detection and harvesting of agricultural produce. Sci. Afr. 2023, 21, e01798. [Google Scholar] [CrossRef]
  26. Quaglia, G.; Visconte, C.; Scimmi, L.S.; Melchiorre, M.; Cavallone, P.; Pastorelli, S. Design of a UGV powered by solar energy for precision agriculture. Robotics 2020, 9, 13. [Google Scholar] [CrossRef]
  27. Ulrich, L.; Vezzetti, E.; Moos, S.; Marcolin, F. Analysis of RGB-D camera technologies for supporting different facial usage scenarios. Multimed. Tools Appl. 2020, 79, 29375–29398. [Google Scholar] [CrossRef]
  28. Tychola, K.A.; Tsimperidis, I.; Papakostas, G.A. On 3D Reconstruction Using RGB-D Cameras. Digital 2022, 2, 401–421. [Google Scholar] [CrossRef]
  29. Kurtser, P.; Lowry, S. RGB-D datasets for robotic perception in site-specific agricultural operations—A survey. Comput. Electron. Agric. 2023, 212, 108035. [Google Scholar] [CrossRef]
  30. Ram, B.G.; Oduor, P.; Igathinathane, C.; Howatt, K.; Sun, X. A systematic review of hyperspectral imaging in precision agriculture: Analysis of its current state and future prospects. Comput. Electron. Agric. 2024, 222, 109037. [Google Scholar] [CrossRef]
  31. Ishimwe, R.; Abutaleb, K.; Ahmed, F. Applications of Thermal Imaging in Agriculture—A Review. Adv. Remote Sens. 2014, 3, 128–140. [Google Scholar] [CrossRef]
  32. Thomson, S.J.; Ouellet-Plamondon, C.M.; DeFauw, S.L.; Huang, Y.; Fisher, D.K.; English, P.J. Potential and Challenges in Use of Thermal Imaging for Humid Region Irrigation System Management. J. Agric. Sci. 2012, 4, 103–115. [Google Scholar] [CrossRef]
  33. Xiang, L.; Wang, D. A review of three-dimensional vision techniques in food and agriculture applications. Smart Agric. Technol. 2023, 5, 100259. [Google Scholar] [CrossRef]
  34. Rivera, G.; Porras, R.; Florencia, R.; Sánchez-Solís, J.P. LiDAR applications in precision agriculture for cultivating crops: A review of recent advances. Comput. Electron. Agric. 2023, 207, 107737. [Google Scholar] [CrossRef]
  35. Radočaj, D.; Plaščak, I.; Jurišić, M. Global Navigation Satellite Systems as State-of-the-Art Solutions in Precision Agriculture: A Review of Studies Indexed in the Web of Science. Agriculture 2023, 13, 1417. [Google Scholar] [CrossRef]
  36. Upadhyay, A.; Zhang, Y.; Koparan, C.; Rai, N.; Howatt, K.; Bajwa, S.; Sun, X. Advances in ground robotic technologies for site-specific weed management in precision agriculture: A review. Comput. Electron. Agric. 2024, 225, 109363. [Google Scholar] [CrossRef]
  37. GNSS Products. Available online: https://drfasching.com/products/raspignss/ (accessed on 9 October 2024).
  38. Centimeter Precision GPS/GNSS—RTK Explained. Available online: https://www.ardusimple.com/rtk-explained/ (accessed on 9 October 2024).
  39. Yuanyuan, Z.; Bin, Z.; Cheng, S.; Haolu, L.; Jicheng, H.; Kunpeng, T.; Zhong, T. Review of the field environmental sensing methods based on multi-sensor information fusion technology. Int. J. Agric. Biol. Eng. 2024, 17, 1–13. [Google Scholar] [CrossRef]
  40. Liu, C.; Nguyen, B.K. Low-Cost Real-Time Localisation for Agricultural Robots in Unstructured Farm Environments. Machines 2024, 12, 612. [Google Scholar] [CrossRef]
  41. NVIDIA Isaac ROS. Available online: https://developer.nvidia.com/isaac/ros (accessed on 9 October 2024).
  42. ROS—Robot Operating System. Available online: https://dev.intelrealsense.com/docs/ros-wrapper (accessed on 9 October 2024).
  43. Cheng, C.; Fu, J.; Su, H.; Ren, L. Recent Advancements in Agriculture Robots: Benefits and Challenges. Machines 2023, 11, 48. [Google Scholar] [CrossRef]
  44. Yuan, S.; Wang, H.; Xie, L. Survey on Localization Systems and Algorithms for Unmanned Systems; World Scientific Pub Co Pte Ltd.: Singapore, 2023; pp. 145–179. [Google Scholar] [CrossRef]
  45. Emmi, L.; Fernández, R.; Gonzalez-de Santos, P.; Francia, M.; Golfarelli, M.; Vitali, G.; Sandmann, H.; Hustedt, M.; Wollweber, M. Exploiting the Internet Resources for Autonomous Robots in Agriculture. Agriculture 2023, 13, 1005. [Google Scholar] [CrossRef]
  46. Kitić, G.; Krklješ, D.; Panić, M.; Petes, C.; Birgermajer, S.; Crnojević, V. Agrobot Lala—An Autonomous Robotic System for Real-Time, In-Field Soil Sampling, and Analysis of Nitrates. Sensors 2022, 22, 4207. [Google Scholar] [CrossRef]
  47. Vulpi, F.; Marani, R.; Petitti, A.; Reina, G.; Milella, A. An RGB-D multi-view perspective for autonomous agricultural robots. Comput. Electron. Agric. 2022, 202, 107419. [Google Scholar] [CrossRef]
  48. Mohammadi, H.; Jiang, Z.; Nguyen, L. A Programmable Hybrid Simulation Environment for Coordination of Autonomous Vehicles. In Proceedings of the NAECON 2023-IEEE National Aerospace and Electronics Conference, Fairborn, OH, USA, 28–31 August 2023; pp. 36–41. [Google Scholar]
  49. Macenski, S.; Moore, T.; Lu, D.V.; Merzlyakov, A.; Ferguson, M. From the desks of ROS maintainers: A survey of modern and capable mobile robotics algorithms in the robot operating system 2. Robot. Auton. Syst. 2023, 168. [Google Scholar] [CrossRef]
  50. Sperti, M.; Ambrosio, M.; Martini, M.; Navone, A.; Ostuni, A.; Chiaberge, M. Non-linear Model Predictive Control for Multi-task GPS-free Autonomous Navigation in Vineyards. arXiv 2024, arXiv:2404.05343. [Google Scholar]
  51. Svyatov, K.; Rubtcov, I.; Ponomarev, A. Virtual testing ground for the development of control systems for unmanned vehicles in agriculture. E3S Web Conf. 2023, 458, 08018. [Google Scholar] [CrossRef]
  52. Mansur, H.; Welch, S.; Dempsey, L.; Flippo, D. Importance of Photo-Realistic and Dedicated Simulator in Agricultural Robotics. Engineering 2023, 15, 318–327. [Google Scholar] [CrossRef]
  53. Iqbal, J.; Xu, R.; Sun, S.; Li, C. Simulation of an Autonomous Mobile Robot for LiDAR-Based In-Field Phenotyping and Navigation. Robotics 2020, 9, 46. [Google Scholar] [CrossRef]
  54. Martini, M.; Cerrato, S.; Salvetti, F.; Angarano, S.; Chiaberge, M. Position-Agnostic Autonomous Navigation in Vineyards with Deep Reinforcement Learning. In Proceedings of the 2022 IEEE 18th International Conference on Automation Science and Engineering (CASE), Mexico City, Mexico, 20–24 August 2022. [Google Scholar] [CrossRef]
  55. Chatziparaschis, D.; Scudiero, E.; Karydis, K. Robot-assisted soil apparent electrical conductivity measurements in orchards. arXiv 2023, arXiv:2309.05128. [Google Scholar]
  56. Ramin Shamshiri, R.; Hameed, I.A.; Pitonakova, L.; Weltzien, C.; Balasundram, S.K.; Yule, I.J.; Grift, T.E.; Chowdhary, G. Simulation software and virtual environments for acceleration of agricultural robotics: Features highlights and performance comparison. Int. J. Agric. Biol. Eng. 2018, 11, 12–20. [Google Scholar] [CrossRef]
  57. Ribeiro, J.P.L.; Gaspar, P.D.; Soares, V.N.G.J.; Caldeira, J.M.L.P. Computational Simulation of an Agricultural Robotic Rover for Weed Control and Fallen Fruit Collection-Algorithms for Image Detection and Recognition and Systems Control, Regulation, and Command. Electronics 2022, 11, 790. [Google Scholar] [CrossRef]
  58. Berger, G.S.; Teixeira, M.; Cantieri, A.; Lima, J.; Pereira, A.I.; Valente, A.; Castro, G.G.R.d.; Pinto, M.F. Cooperative Heterogeneous Robots for Autonomous Insects Trap Monitoring System in a Precision Agriculture Scenario. Agriculture 2023, 13, 239. [Google Scholar] [CrossRef]
  59. Zhang, J.; Du, X.; Dong, Q.; Xin, B. Distributed Collaborative Complete Coverage Path Planning Based on Hybrid Strategy. J. Syst. Eng. Electron. 2024, 35, 463–472. [Google Scholar] [CrossRef]
  60. PyTorch. Available online: https://pytorch.org/ (accessed on 9 October 2024).
  61. TensorFlow. Available online: https://www.tensorflow.org/ (accessed on 9 October 2024).
  62. Nvidia CUDA. Available online: https://developer.nvidia.com/cuda-toolkit (accessed on 9 October 2024).
  63. Intel oneAPI. Available online: https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html (accessed on 9 October 2024).
  64. AMD ROCm. Available online: https://www.amd.com/en/products/software/rocm.html (accessed on 9 October 2024).
  65. OpenCV—Open Computer Vision Library. Available online: https://opencv.org/ (accessed on 9 October 2024).
  66. Bah, M.D.; Hafiane, A.; Canals, R. CRowNet: Deep Network for Crop Row Detection in UAV Images. IEEE Access 2020, 8, 5189–5200. [Google Scholar] [CrossRef]
  67. Zhou, M.; Xia, J.; Yang, F.; Zheng, K.; Hu, M.; Li, D.; Zhang, S. Design and experiment of visual navigated UGV for orchard based on Hough matrix and RANSAC. Int. J. Agric. Biol. Eng. 2021, 14, 176–184. [Google Scholar] [CrossRef]
  68. Gehan, M.A.; Fahlgren, N.; Abbasi, A.; Berry, J.C.; Callen, S.T.; Chavez, L.; Doust, A.N.; Feldman, M.J.; Gilbert, K.B.; Hodge, J.G.; et al. PlantCV v2: Image analysis software for high-throughput plant phenotyping. PeerJ 2017, 5, e4088. [Google Scholar] [CrossRef]
  69. OpenMP. Available online: https://www.openmp.org/ (accessed on 9 October 2024).
  70. Nvidia GeForce RTX 40 Series Graphics Cards. Available online: https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/ (accessed on 9 October 2024).
  71. NVIDIA Jetson for Next-Generation Robotics. Available online: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems (accessed on 9 October 2024).
  72. Coral USB Accelerator. Available online: https://coral.ai/products/accelerator (accessed on 9 October 2024).
  73. AMD Versal AI Edge Series VEK280 Evaluation Kit. Available online: https://www.xilinx.com/products/boards-and-kits/vek280.html (accessed on 9 October 2024).
  74. Vasconcelos, G.J.Q.; Costa, G.S.R.; Spina, T.V.; Pedrini, H. Low-Cost Robot for Agricultural Image Data Acquisition. Agriculture 2023, 13, 413. [Google Scholar] [CrossRef]
  75. Aguilera, C.A.; Figueroa-Flores, C.; Aguilera, C.; Navarrete, C. Comprehensive Analysis of Model Errors in Blueberry Detection and Maturity Classification: Identifying Limitations and Proposing Future Improvements in Agricultural Monitoring. Agriculture 2024, 14, 18. [Google Scholar] [CrossRef]
  76. Alibabaei, K.; Assunção, E.; Gaspar, P.D.; Soares, V.N.G.J.; Caldeira, J.M.L.P. Real-Time Detection of Vine Trunk for Robot Localization Using Deep Learning Models Developed for Edge TPU Devices. Future Internet 2022, 14, 199. [Google Scholar] [CrossRef]
  77. Budiyanta, N.E.; Sereati, C.O.; Manalu, F.R.G. Processing time increasement of non-rice object detection based on YOLOv3-tiny using Movidius NCS 2 on Raspberry Pi. Bull. Electr. Eng. Inform. 2022, 11, 1056–1061. [Google Scholar] [CrossRef]
  78. Routis, G.; Michailidis, M.; Roussaki, I. Plant Disease Identification Using Machine Learning Algorithms on Single-Board Computers in IoT Environments. Electronics 2024, 13, 1010. [Google Scholar] [CrossRef]
  79. Shende, K.; Sharda, A.; Hitzler, P. Hardware Design and Architecture of Multiagent Wireless Data Communication for Precision Agriculture Applications. SSRN 2023. [Google Scholar] [CrossRef]
  80. Mwitta, C.; Rains, G.C. The integration of GPS and visual navigation for autonomous navigation of an Ackerman steering mobile robot in cotton fields. Front. Robot. AI 2024, 11, 1359887. [Google Scholar] [CrossRef]
  81. Lyu, Z.; Lu, A.; Ma, Y. Improved YOLOv8-Seg Based on Multiscale Feature Fusion and Deformable Convolution for Weed Precision Segmentation. Appl. Sci. 2024, 14, 5002. [Google Scholar] [CrossRef]
  82. Edge TPU Performance Benchmarks. Available online: https://coral.ai/docs/edgetpu/benchmarks/ (accessed on 9 October 2024).
  83. Bringing Generative AI to Life with NVIDIA Jetson. 2023. Available online: https://developer.nvidia.com/blog/bringing-generative-ai-to-life-with-jetson/ (accessed on 9 October 2024).
  84. Lei, T.; Luo, C.; Jan, G.E.; Bi, Z. Deep Learning-Based Complete Coverage Path Planning With Re-Joint and Obstacle Fusion Paradigm. Front. Robot. AI 2022, 9, 843816. [Google Scholar] [CrossRef] [PubMed]
  85. Botta, A.; Moreno, E.; Baglieri, L.; Colucci, G.; Tagliavini, L.; Quaglia, G. Autonomous Driving System for Reversing an Articulated Rover for Precision Agriculture. Mech. Mach. Sci. 2022, 120 MMS, 412–419. [Google Scholar] [CrossRef]
  86. Tagarakis, A.C.; Filippou, E.; Kalaitzidis, D.; Benos, L.; Busato, P.; Bochtis, D. Proposing UGV and UAV Systems for 3D Mapping of Orchard Environments. Sensors 2022, 22, 1571. [Google Scholar] [CrossRef] [PubMed]
  87. Matsuzaki, S.; Miura, J.; Masuzawa, H. Multi-source pseudo-label learning of semantic segmentation for the scene recognition of agricultural mobile robots. Adv. Robot. 2022, 36, 1011–1029. [Google Scholar] [CrossRef]
  88. Matsuzaki, S.; Masuzawa, H.; Miura, J. Image-Based Scene Recognition for Robot Navigation Considering Traversable Plants and Its Manual Annotation-Free Training. IEEE Access 2022, 10, 5115–5128. [Google Scholar] [CrossRef]
  89. Ding, H.; Zhang, B.; Zhou, J.; Yan, Y.; Tian, G.; Gu, B. Recent developments and applications of simultaneous localization and mapping in agriculture. J. Field Robot. 2022, 39, 956–983. [Google Scholar] [CrossRef]
  90. Jiang, S.; Qi, P.; Han, L.; Liu, L.; Li, Y.; Huang, Z.; Liu, Y.; He, X. Navigation system for orchard spraying robot based on 3D LiDAR SLAM with NDT_ICP point cloud registration. Comput. Electron. Agric. 2024, 220, 108870. [Google Scholar] [CrossRef]
  91. Yan, Y.; Zhang, B.; Zhou, J.; Zhang, Y.; Liu, X. Real-Time Localization and Mapping Utilizing Multi-Sensor Fusion and Visual–IMU–Wheel Odometry for Agricultural Robots in Unstructured, Dynamic and GPS-Denied Greenhouse Environments. Agronomy 2022, 12, 1740. [Google Scholar] [CrossRef]
  92. Polvara, R.; Del Duchetto, F.; Neumann, G.; Hanheide, M. Navigate-and-Seek: A Robotics Framework for People Localization in Agricultural Environments. IEEE Robot. Autom. Lett. 2021, 6, 6577–6584. [Google Scholar] [CrossRef]
  93. Xu, R.; Li, C. A Review of High-Throughput Field Phenotyping Systems: Focusing on Ground Robots. Plant Phenomics 2022, 2022, 9760269. [Google Scholar] [CrossRef] [PubMed]
  94. Li, L.; Liang, H.; Wang, J.; Yang, J.; Li, Y. Online Routing for Autonomous Vehicle Cruise Systems with Fuel Constraints. J. Intell. Robot. Syst. Theory Appl. 2022, 104, 68. [Google Scholar] [CrossRef]
  95. Liu, L.; Wang, X.; Yang, X.; Liu, H.; Li, J.; Wang, P. Path planning techniques for mobile robots: Review and prospect. Expert Syst. Appl. 2023, 227, 120254. [Google Scholar] [CrossRef]
  96. Shi, C.; Xiong, Z.; Chen, M.; Wang, R.; Xiong, J. Cooperative Navigation for Heterogeneous Air-Ground Vehicles Based on Interoperation Strategy. Remote Sens. 2023, 15, 2006. [Google Scholar] [CrossRef]
  97. Sevastopoulos, C.; Konstantopoulos, S. A Survey of Traversability Estimation for Mobile Robots. IEEE Access 2022, 10, 96331–96347. [Google Scholar] [CrossRef]
  98. Gonzalez, D.; Perez, J.; Milanes, V.; Nashashibi, F. A Review of Motion Planning Techniques for Automated Vehicles. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1135–1145. [Google Scholar] [CrossRef]
  99. Pak, J.; Kim, J.; Park, Y.; Son, H.I. Field Evaluation of Path-Planning Algorithms for Autonomous Mobile Robot in Smart Farms. IEEE Access 2022, 10, 60253–60266. [Google Scholar] [CrossRef]
  100. Jakubczyk, K.; Siemiątkowska, B.; Więckowski, R.; Rapcewicz, J. Hyperspectral Imaging for Mobile Robot Navigation. Sensors 2023, 23, 383. [Google Scholar] [CrossRef]
  101. Chakraborty, S.; Elangovan, D.; Govindarajan, P.L.; ELnaggar, M.F.; Alrashed, M.M.; Kamel, S. A Comprehensive Review of Path Planning for Agricultural Ground Robots. Sustainability 2022, 14, 9156. [Google Scholar] [CrossRef]
  102. Wang, H.; Li, G.; Hou, J.; Chen, L.; Hu, N. A Path Planning Method for Underground Intelligent Vehicles Based on an Improved RRT* Algorithm. Electronics 2022, 11, 294. [Google Scholar] [CrossRef]
  103. Noreen, I.; Khan, A.; Habib, Z. Optimal path planning using RRT* based approaches: A survey and future directions. Int. J. Adv. Comput. Sci. Appl. 2016, 7. Available online: https://thesai.org/Publications/ViewPaper?Volume=7&Issue=11&Code=IJACSA&SerialNo=14 (accessed on 9 October 2024). [CrossRef]
  104. Zadeh, N.; Hashimoto, H.; Raper, D.; Tanuyan, E.; Bruca, M. Autonomous smart farming system using FLANN-based feature matcher with robotic arm. AIP Conf. Proc. 2022, 2502, 040004. [Google Scholar] [CrossRef]
  105. Darwin, B.; Dharmaraj, P.; Prince, S.; Popescu, D.E.; Hemanth, D.J. Recognition of Bloom/Yield in Crop Images Using Deep Learning Models for Smart Agriculture: A Review. Agronomy 2021, 11, 646. [Google Scholar] [CrossRef]
  106. Subeesh, A.; Mehta, C. Automation and digitization of agriculture using artificial intelligence and internet of things. Artif. Intell. Agric. 2021, 5, 278–291. [Google Scholar] [CrossRef]
  107. Lowe, T.; Moghadam, P.; Edwards, E.; Williams, J. Canopy density estimation in perennial horticulture crops using 3D spinning lidar SLAM. J. Field Robot. 2021, 38, 598–618. [Google Scholar] [CrossRef]
  108. Giubilato, R.; Sturzl, W.; Wedler, A.; Triebel, R. Challenges of SLAM in Extremely Unstructured Environments: The DLR Planetary Stereo, Solid-State LiDAR, Inertial Dataset. IEEE Robot. Autom. Lett. 2022, 7, 8721–8728. [Google Scholar] [CrossRef]
  109. Galati, R.; Mantriota, G.; Reina, G. RoboNav: An Affordable Yet Highly Accurate Navigation System for Autonomous Agricultural Robots. Robotics 2022, 11, 99. [Google Scholar] [CrossRef]
  110. Abdelaziz, S.I.K.; Elghamrawy, H.Y.; Noureldin, A.M.; Fotopoulos, G. Body-Centered Dynamically-Tuned Error-State Extended Kalman Filter for Visual Inertial Odometry in GNSS-Denied Environments. IEEE Access 2024, 12, 15997–16008. [Google Scholar] [CrossRef]
  111. Zhao, Z.; Zhang, Y.; Shi, J.; Long, L.; Lu, Z. Robust Lidar-Inertial Odometry with Ground Condition Perception and Optimization Algorithm for UGV. Sensors 2022, 22, 7424. [Google Scholar] [CrossRef]
  112. Rosero-Montalvo, P.D.; Gordillo-Gordillo, C.A.; Hernandez, W. Smart Farming Robot for Detecting Environmental Conditions in a Greenhouse. IEEE Access 2023, 11, 57843–57853. [Google Scholar] [CrossRef]
  113. Kamil, F.; Gburi, F.H.; Kadhom, M.A.; Kalaf, B.A. Fuzzy Logic-Based Control for Intelligent Vehicles: A Survey. AIP Conf. Proc. 2024, 3092. [Google Scholar] [CrossRef]
  114. Wang, T.; Wang, H.; Hu, H.; Lu, X.; Zhao, S. An adaptive fuzzy PID controller for speed control of brushless direct current motor. SN Appl. Sci. 2022, 4, 71. [Google Scholar] [CrossRef]
  115. Kägo, R.; Vellak, P.; Karofeld, E.; Noorma, M.; Olt, J. Assessment of using state of the art unmanned ground vehicles for operations on peat fields. Mires Peat 2021, 27, 11. [Google Scholar] [CrossRef]
  116. Simmons, A.; Chappell, S. Artificial intelligence-definition and practice. IEEE J. Ocean. Eng. 1988, 13, 14–42. [Google Scholar] [CrossRef]
  117. Kok, J.N.; Boers, E.J.; Kosters, W.A.; Van der Putten, P.; Poel, M. Artificial intelligence: Definition, trends, techniques, and cases. Artif. Intell. 2009, 1, 51. [Google Scholar]
  118. Nie, J.; Wang, Y.; Li, Y.; Chao, X. Artificial intelligence and digital twins in sustainable agriculture and forestry: A survey. Turk. J. Agric. For. 2022, 46, 642–661. [Google Scholar] [CrossRef]
  119. Ojo, M.O.; Zahid, A. Deep Learning in Controlled Environment Agriculture: A Review of Recent Advancements, Challenges and Prospects. Sensors 2022, 22, 7965. [Google Scholar] [CrossRef]
  120. Tian, H.; Wang, T.; Liu, Y.; Qiao, X.; Li, Y. Computer vision technology in agricultural automation—A review. Inf. Process. Agric. 2020, 7, 1–19. [Google Scholar] [CrossRef]
  121. Linaza, M.T.; Posada, J.; Bund, J.; Eisert, P.; Quartulli, M.; Döllner, J.; Pagani, A.; Olaizola, I.G.; Barriguinha, A.; Moysiadis, T.; et al. Data-Driven Artificial Intelligence Applications for Sustainable Precision Agriculture. Agronomy 2021, 11, 1227. [Google Scholar] [CrossRef]
  122. Bini, D.; Pamela, D.; Prince, S. Machine Vision and Machine Learning for Intelligent Agrobots: A review. In Proceedings of the 2020 5th International Conference on Devices, Circuits and Systems (ICDCS), Coimbatore, India, 5–6 March 2020. [Google Scholar] [CrossRef]
  123. Wang, H.; Gu, J.; Wang, M. A review on the application of computer vision and machine learning in the tea industry. Front. Sustain. Food Syst. 2023, 7, 1172543. [Google Scholar] [CrossRef]
  124. Mohimont, L.; Alin, F.; Rondeau, M.; Gaveau, N.; Steffenel, L.A. Computer Vision and Deep Learning for Precision Viticulture. Agronomy 2022, 12, 2463. [Google Scholar] [CrossRef]
  125. Kumar, G.K.; Bangare, M.L.; Bangare, P.M.; Kumar, C.R.; Raj, R.; Arias-Gonzáles, J.L.; Omarov, B.; Mia, M.S. Internet of things sensors and support vector machine integrated intelligent irrigation system for agriculture industry. Discov. Sustain. 2024, 5, 6. [Google Scholar] [CrossRef]
  126. Bishnoi, S.; Hooda, B.K. Decision Tree Algorithms and their Applicability in Agriculture for Classification. J. Exp. Agric. Int. 2022, 44, 20–27. [Google Scholar] [CrossRef]
  127. Sapkal, K.G.; Kadam, A.B. Random Forest Classifier For Crop Prediction Based On Soil Data. J. Adv. Zool. 2024, 45, 113–117. [Google Scholar] [CrossRef]
  128. Bhanu Koduri, S.; Gunisetti, L.; Raja Ramesh, C.; Mutyalu, K.V.; Ganesh, D. Prediction of crop production using adaboost regression method. J. Phys. Conf. Ser. 2019, 1228, 012005. [Google Scholar] [CrossRef]
  129. Liu, R.; Yandun, F.; Kantor, G. LiDAR-Based Crop Row Detection Algorithm for Over-Canopy Autonomous Navigation in Agriculture Fields. arXiv 2024, arXiv:2403.17774. [Google Scholar]
  130. Mokssit, S.; Licea, D.B.; Guermah, B.; Ghogho, M. Deep Learning Techniques for Visual SLAM: A Survey. IEEE Access 2023, 11, 20026–20050. [Google Scholar] [CrossRef]
  131. Wang, K.; Ma, S.; Chen, J.; Ren, F.; Lu, J. Approaches, Challenges, and Applications for Deep Visual Odometry: Toward Complicated and Emerging Areas. IEEE Trans. Cogn. Dev. Syst. 2022, 14, 35–49. [Google Scholar] [CrossRef]
  132. Pathan, M.; Patel, N.; Yagnik, H.; Shah, M. Artificial cognition for applications in smart agriculture: A comprehensive review. Artif. Intell. Agric. 2020, 4, 81–95. [Google Scholar] [CrossRef]
  133. Sujatha, K.; Reddy, T.K.; Bhavani, N.; Ponmagal, R.; Srividhya, V.; Janaki, N. UGVs for Agri Spray with AI assisted Paddy Crop disease Identification. Procedia Comput. Sci. 2023, 230, 70–81. [Google Scholar] [CrossRef]
  134. Wang, K.; Kooistra, L.; Pan, R.; Wang, W.; Valente, J. UAV-based simultaneous localization and mapping in outdoor environments: A systematic scoping review. J. Field Robot. 2024, 41, 1617–1642. [Google Scholar] [CrossRef]
  135. Gharakhani, H.; Thomasson, J.A. Evaluating object detection and stereoscopic localization of a robotic cotton harvester under real field conditions. In Proceedings of SPIE Defense + Commercial Sensing, Orlando, FL, USA, 2023; Volume 12539. [Google Scholar] [CrossRef]
  136. Akter, R.; Islam, M.S.; Sohan, K.; Ahmed, M.I. Insect Recognition and Classification Using Optimized Densely Connected Convolutional Neural Network. Lect. Notes Netw. Syst. 2023, 624, 251–264. [Google Scholar] [CrossRef]
  137. Stefanović, D.; Antić, A.; Otlokan, M.; Ivošević, B.; Marko, O.; Crnojević, V.; Panić, M. Blueberry Row Detection Based on UAV Images for Inferring the Allowed UGV Path in the Field. Lect. Notes Netw. Syst. 2023, 590, 401–411. [Google Scholar] [CrossRef]
  138. Thapa, S.; Rains, G.C.; Porter, W.M.; Lu, G.; Wang, X.; Mwitta, C.; Virk, S.S. Robotic Multi-Boll Cotton Harvester System Integration and Performance Evaluation. AgriEngineering 2024, 6, 803–822. [Google Scholar] [CrossRef]
  139. Park, Y.H.; Choi, S.H.; Kwon, Y.J.; Kwon, S.W.; Kang, Y.J.; Jun, T.H. Detection of Soybean Insect Pest and a Forecasting Platform Using Deep Learning with Unmanned Ground Vehicles. Agronomy 2023, 13, 477. [Google Scholar] [CrossRef]
  140. Huang, P.; Huang, P.; Wang, Z.; Wu, X.; Liu, J.; Zhu, L. Deep-Learning-Based Trunk Perception with Depth Estimation and DWA for Robust Navigation of Robotics in Orchards. Agronomy 2023, 13, 1084. [Google Scholar] [CrossRef]
  141. Lacotte, V.; NGuyen, T.; Sempere, J.D.; Novales, V.; Dufour, V.; Moreau, R.; Pham, M.T.; Rabenorosoa, K.; Peignier, S.; Feugier, F.G.; et al. Pesticide-Free Robotic Control of Aphids as Crop Pests. AgriEngineering 2022, 4, 903–921. [Google Scholar] [CrossRef]
  142. Khan, M.S.A.; Hussian, D.; Ali, Y.; Rehman, F.U.; Aqeel, A.B.; Khan, U.S. Multi-Sensor SLAM for efficient Navigation of a Mobile Robot. In Proceedings of the 2021 IEEE 4th International Conference on Computing and Information Sciences, ICCIS 2021, Karachi, Pakistan, 29–30 November 2021. [Google Scholar] [CrossRef]
  143. Nourizadeh, P.; Stevens McFadden, F.J.; Browne, W.N. In situ slip estimation for mobile robots in outdoor environments. J. Field Robot. 2023, 40, 467–482. [Google Scholar] [CrossRef]
  144. Liu, C.; Zhao, J.; Sun, N. A Review of Collaborative Air-Ground Robots Research. J. Intell. Robot. Syst. Theory Appl. 2022, 106, 60. [Google Scholar] [CrossRef]
  145. Wang, C.; Wang, J.; Wei, C.; Zhu, Y.; Yin, D.; Li, J. Vision-Based Deep Reinforcement Learning of UAV-UGV Collaborative Landing Policy Using Automatic Curriculum. Drones 2023, 7, 676. [Google Scholar] [CrossRef]
  146. Blais, M.A.; Akhloufi, M.A. Reinforcement learning for swarm robotics: An overview of applications, algorithms and simulators. Cogn. Robot. 2023, 3, 226–256. [Google Scholar] [CrossRef]
  147. Xiao, Q.; Li, Y.; Luo, F.; Liu, H. Analysis and assessment of risks to public safety from unmanned aerial vehicles using fault tree analysis and Bayesian network. Technol. Soc. 2023, 73, 102229. [Google Scholar] [CrossRef]
  148. Altalak, M.; Ammad uddin, M.; Alajmi, A.; Rizg, A. Smart Agriculture Applications Using Deep Learning Technologies: A Survey. Appl. Sci. 2022, 12, 5919. [Google Scholar] [CrossRef]
  149. Farjon, G.; Huijun, L.; Edan, Y. Deep-Learning-based Counting Methods, Datasets, and Applications in Agriculture—A Review. arXiv 2023, arXiv:2303.02632. [Google Scholar]
  150. Lu, Y.; Young, S. A survey of public datasets for computer vision tasks in precision agriculture. Comput. Electron. Agric. 2020, 178, 105760. [Google Scholar] [CrossRef]
  151. Chen, J.; Liu, H.; Zhang, Y.; Zhang, D.; Ouyang, H.; Chen, X. A Multiscale Lightweight and Efficient Model Based on YOLOv7: Applied to Citrus Orchard. Plants 2022, 11, 3260. [Google Scholar] [CrossRef]
  152. Samson Adekunle, T.; Oladayo Lawrence, M.; Omotayo Alabi, O.; Afolorunso, A.A.; Nse Ebong, G.; Abiola Oladipupo, M. Deep learning technique for plant disease detection. Comput. Sci. Inf. Technol. 2024, 5, 55–62. [Google Scholar] [CrossRef]
  153. Yu, F.; Wang, M.; Xiao, J.; Zhang, Q.; Zhang, J.; Liu, X.; Ping, Y.; Luan, R. Advancements in Utilizing Image-Analysis Technology for Crop-Yield Estimation. Remote Sens. 2024, 16, 1003. [Google Scholar] [CrossRef]
  154. Singh, M.; Fuenmayor, E.; Hinchy, E.; Qiao, Y.; Murray, N.; Devine, D. Digital Twin: Origin to Future. Appl. Syst. Innov. 2021, 4, 36. [Google Scholar] [CrossRef]
  155. Verdouw, C.; Tekinerdogan, B.; Beulens, A.; Wolfert, S. Digital twins in smart farming. Agric. Syst. 2021, 189, 103046. [Google Scholar] [CrossRef]
  156. Tomczyk, M.; van der Valk, H. Digital Twin Paradigm Shift: The Journey of the Digital Twin Definition. Proc. ICEIS 2022, 2, 90–97. [Google Scholar]
  157. Agrawal, A.; Fischer, M.; Singh, V. Digital Twin: From Concept to Practice. arXiv 2022, arXiv:2201.06912. [Google Scholar]
  158. Skobelev, P.; Laryukhin, V.; Simonova, E.; Goryanin, O.; Yalovenko, V.; Yalovenko, O. Multi-agent approach for developing a digital twin of wheat. In Proceedings of the 2020 IEEE International Conference on Smart Computing (SMARTCOMP), Bologna, Italy, 14–17 September 2020. [Google Scholar] [CrossRef]
  159. Alves, R.G.; Souza, G.; Maia, R.F.; Tran, A.L.H.; Kamienski, C.; Soininen, J.P.; Aquino, P.T.; Lima, F. A digital twin for smart farming. In Proceedings of the 2019 IEEE Global Humanitarian Technology Conference (GHTC), Seattle, WA, USA, 17–20 October 2019. [Google Scholar] [CrossRef]
  160. Han, J.B.; Kim, S.S.; Song, H.J. Development of real-time digital twin model of autonomous field robot for prediction of vehicle stability. J. Inst. Control Robot. Syst. 2021, 27, 190–196. [Google Scholar] [CrossRef]
  161. Cesco, S.; Sambo, P.; Borin, M.; Basso, B.; Orzes, G.; Mazzetto, F. Smart agriculture and digital twins: Applications and challenges in a vision of sustainability. Eur. J. Agron. 2023, 146, 126809. [Google Scholar] [CrossRef]
  162. Malik, P.; Sneha; Garg, D.; Bedi, H.; Gehlot, A.; Malik, P.K. An Improved Agriculture Farming Through the Role of Digital Twin. In Proceedings of the 2023 4th International Conference on Electronics and Sustainable Communication Systems (ICESC), Coimbatore, India, 6–8 July 2023. [Google Scholar]
  163. Nair, M.; Dede, O.L.; De, S.; Fernandez, R.E. Digital Twin for Bruise Detection in Precision Agriculture. In Proceedings of the 2024 IEEE 21st Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 6–9 January 2024. [Google Scholar]
  164. Weckesser, F.; Beck, M.; Hülsbergen, K.J.; Peisl, S. A Digital Advisor Twin for Crop Nitrogen Management. Agriculture 2022, 12, 302. [Google Scholar] [CrossRef]
  165. Banić, M.; Simonović, M.; Stojanović, L.; Rangelov, D.; Miltenović, A.; Perić, M. Digital twin based lightweighting of robot unmanned ground vehicles. Facta Univ. Ser. Autom. Control Robot. 2022, 1, 187–199. [Google Scholar] [CrossRef]
  166. Tsolakis, N.; Bechtsis, D.; Bochtis, D. Agros: A robot operating system based emulation tool for agricultural robotics. Agronomy 2019, 9, 403. [Google Scholar] [CrossRef]
Table 1. Commercial outdoor RGB-D cameras particularly suited for crop surveillance.
| MFR 1 | Model | Tech 1 | Shutter 1 | Max res 1 | Range (m) 1 | FoV 1 | Intf 1 | Prot 1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Orbbec | Gemini 336 | SV | Rolling | 1280 × 800 @30 fps | 0.1–10 | 90° × 65° | USB3.0 Type C | IP5X |
| Orbbec | Gemini 336L | SV | Global | 1280 × 800 @30 fps | 0.17–10 | 94° × 68° | USB3.0 Type C | IP65 |
| Orbbec | Femto Mega I | ToF | Rolling | 1024 × 1024 @10 fps | 0.25–3.86 | 80° × 51° | Ethernet | IP65 |
| Orbbec | Femto Bolt | ToF | Rolling | 1024 × 1024 @15 fps | 0.25–5.46 | 80° × 51° | USB3.2 Type C | n/a |
| Orbbec | Astra 2 | SL | Rolling | 1600 × 100 @30 fps | 0.6–8 | 75° × 36° | USB3.0 Type C | n/a |
| Intel | RS ZR300 | SV | Rolling | 1920 × 1080 @30 fps | 0.55–2.8 | 59° × 46° | USB3.0 | n/a |
| Sick | V3S146 | SV | Global | 1024 × 576 @30 fps | 0.28–16 / 0.65–37 | 130° × 105° / 90° × 60° | Ethernet | n/a |
| Vzense | NYX660 | ToF | Global | 1600 × 1200 @15 fps | 0.3–4.5 | 70° × 50° | Ethernet, RS485 | IP67 |
| Basler | blaze-102 | ToF | Global | 640 × 480 @30 fps | 0.3–10 | 67° × 51° | Ethernet | IP67 |
| StereoLabs | ZED 2i | SV | Rolling | 2208 × 1242 @15 fps | 0.3–20 @f/2; 1.5–35 @f/1.8 | 120° | USB Type C | IP66 |
| Sipeed | MaixSense A075 | ToF | Rolling | 1600 × 1200 @30 fps | 0.15–1.5 | 120° | USB2.0 Type C | n/a |
| eCon | Tara Stereo Vision | SV | Global | 752 × 480 @60 fps (monochrome) | 0.5–3 | n/a | USB3 | n/a |
| Lucid | Helios2 Ray Outdoor | ToF | Global | 640 × 480 @30 fps | 0.3–8.33 | n/a | Ethernet | IP67 |
1 MFR = “manufacturer”; Tech = “technology”; Shutter = “shutter type (RGB)”; Max res = “maximum RGB resolution”; Range = “measuring range”; FoV = “Field of View (H × V)”; Intf = “interface”; Prot = “protection”.
Table 2. Commercial LiDARs particularly suited for crop surveillance.
| MFR 1 | Model | HR (deg) 1 | VR (deg) 1 | SR (Hz) 1 | MR (m) 1 | FoV (deg) 1 | Acc (cm) 1 | Intf 1 | Prot 1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Velodyne | Puck VLP-16 | 0.1–0.4 | 2 | 5–20 | 100 | 30 × 360 | 3 | Ethernet | IP67 |
| Velodyne | Ultra Puck VLP-32c | 0.1–0.4 | 0.33 | 5–20 | 200 | 40 × 360 | 3 | Ethernet | IP67 |
| Velodyne | HDL-32 | 0.08–0.33 | 1.33 | 5–20 | 100 | 41 × 360 | 2 | Ethernet | IP67 |
| Velodyne | Alpha Prime | 0.1–0.4 | 0.11 | 5–20 | 245 | 40 × 360 | 3 | Ethernet | IP67 |
| Innoviz | One | 0.1 | 0.1 | 10–15 | 250 | 25 × 115 | 3 | MIPI CSI-2 | IP6K9K |
| Innoviz | Two | 0.05 | 0.05 | 20 | 300 | 43 × 120 | 1 | MIPI CSI-2 | IP6K9K |
| Leddar Tech | Pixell | n/a | n/a | 20 | 56 | 16 × 178 | 3 | Ethernet | IP67 |
| Quanergy Systems | M8-Prime Ultra | 0.033 | 0.033 | 5–20 | 200 | 20 × 360 | 3 | Ethernet | IP69K |
| RIEGL | VUX-1HA | 0.001 | ( * ) | 250 | 150 | 360 | 0.5 | Ethernet, USB | IP64 |
| Sick AG | LMS511 | 0.17 | ( * ) | 25–100 | 80 | 190 | 0.24 | Ethernet | IP67 |
( * ) 2D LiDAR. 1 MFR = “manufacturer”; HR = “horizontal resolution”; VR = “vertical resolution”; SR = “scan rate”; MR = “maximum range”; FoV = “Field of View (V × H)”; Acc = “accuracy”; Intf = “interface”; Prot = “protection”.
Table 3. Main supported features and bibliographic references of simulation environments.
| Simulation Environment | Robot Types | Sensors | OS 1 | Programming Languages and Environments | Ref. 1 |
| --- | --- | --- | --- | --- | --- |
| Gazebo | UAV, UGV, cars, boats | RGB, IR, LiDAR, IMU, GNSS, Force, Distance, Altimeter, Depth, RFID | Linux, Mac OS, Windows | Python, C++ and Java through ROS | [47,49,50,53,54,55,56] |
| CoppeliaSim | Various industrial robots, ground vehicles | LiDAR, GNSS, IMU, RGB, Infrared, Proximity | Linux, Mac OS, Windows | C++, Python, ROS | [56,57,58] |
| Webots | UAV, UGV, cars, humanoids | LiDAR, GNSS, IMU, RGB | Linux, Mac OS, Windows | C++, Python, Java, MATLAB, ROS | [51,56] |
| Isaac Sim | Industrial robots, ground robots | LiDAR, GNSS, IMU, Infrared, RGB-D | Linux, Windows | C, C++, Python, Java, Lua, MATLAB, Octave | [59] |
1 OS = “operating system”; Ref. = “bibliographic reference”.
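All the simulators in Table 3 expose their simulated sensors to user code through ROS topics. As a purely illustrative sketch (not drawn from any of the cited works), the following minimal ROS 2 node subscribes to a simulated LiDAR scan; the `/scan` topic name and the node name are assumptions that depend on how the simulation and its ROS bridge are configured.

```python
# Minimal ROS 2 (rclpy) subscriber sketch. It assumes a simulator from Table 3
# (e.g., Gazebo or Webots via its ROS bridge) is already publishing a
# sensor_msgs/LaserScan on a topic named '/scan'; topic and node names are
# illustrative assumptions, not taken from the reviewed works.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan


class ScanMonitor(Node):
    def __init__(self):
        super().__init__('scan_monitor')
        # Queue depth 10 is a common default for sensor-style topics.
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

    def on_scan(self, msg: LaserScan):
        # Log the closest valid return, e.g., to spot obstacles between crop rows.
        valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
        if valid:
            self.get_logger().info(f'closest return: {min(valid):.2f} m')


def main():
    rclpy.init()
    rclpy.spin(ScanMonitor())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

The same pattern applies to the other sensor types listed (RGB, IMU, GNSS), only with different message types and topic names.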
Table 4. Comparison of GPUs, SoMs/SoCs, one ASIC commonly used for AI inference, and one reference FPGA. For each model, the highest-performing version has been considered, with the exception of the Nvidia Orin Nano. TOPS: tera operations per second. Data for the Nvidia GeForce, Nvidia Jetson, Coral USB and FPGA were taken from [70,71,72,73]. Rugged versions are available at an increased cost. The last row refers to newly announced consumer CPUs (SoMs, System-on-a-Module) expected to arrive on the market soon, with the Intel Core Ultra 9 used as a term of comparison.
| Class | Device | AI (TOPS) 1 | TDP (W) 1 | RAM (GB) 1 | Cost (€) |
| --- | --- | --- | --- | --- | --- |
| GPU | Nvidia 4060 | 353 | 160 | 16 (GDDR6) | 430 |
| GPU | Nvidia 4070 | 706 | 285 | 16 (GDDR6X) | 800 |
| GPU | Nvidia 4080 | 836 | 320 | 16 (GDDR6X) | 1050 |
| GPU | Nvidia 4090 | 1321 | 450 | 24 (GDDR6X) | 1800 |
| SoM/SoC | Nvidia Jetson Nano | 0.5 | 10 | 4 (LPDDR4) | 240 |
| SoM/SoC | Nvidia Jetson TX2 | 1.3 | 20 | 8 (LPDDR4) | 420 |
| SoM/SoC | Nvidia Jetson Xavier | 21 | 20 | 16 (LPDDR4X) | 720 |
| SoM/SoC | Nvidia Jetson AGX Orin | 275 | 60 | 64 (LPDDR5) | 2250 |
| SoM/SoC | Nvidia Jetson Orin Nano | 40 | 15 | 8 (LPDDR5) | 550 |
| TPU | Coral TPU | 4 | 2 | - | 80 |
| FPGA | AMD Versal AI Edge VEK280 | 114 (a) | 75 | 12 | 6900 |
| SoM | New CPUs Q4 2024 | up to 120 | 17–37 | 16/32 | - |
1 AI (TOPS): maximum performance with AI tasks; TDP (W): thermal design power, maximum amount of heat that the device can dissipate under normal operating conditions; RAM (GB): amount and type of RAM memory; cost (€): approximate device cost as of 30 June 2024. (a) BFLOAT16, extrapolated.
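A rough way to read the AI (TOPS) column is to compare it with the compute demand of the intended on-board model. The sketch below is a back-of-the-envelope estimate only, assuming roughly two operations per multiply–accumulate and ignoring memory bandwidth, quantization, and utilization losses; the model size, frame rate, and utilization fraction are hypothetical values, not figures from this review.

```python
# Back-of-the-envelope check of whether an edge device's TOPS budget covers a
# detection model at a target frame rate. Assumptions: ~2 ops per MAC and no
# memory-bandwidth or quantization effects; real utilization is hardware- and
# model-dependent.

def required_tops(macs_per_frame: float, fps: float) -> float:
    """Tera-operations per second needed to run the model at the given frame rate."""
    return 2.0 * macs_per_frame * fps / 1e12

# Hypothetical example: a mid-sized detector (~50 GMACs per frame) at 15 fps.
need = required_tops(macs_per_frame=50e9, fps=15)
print(f"required: {need:.1f} TOPS")  # ~1.5 TOPS

# Compare against a device's peak from Table 4, assuming (hypothetically) that
# only ~30% of peak TOPS is achievable in practice.
device_peak_tops = 40            # e.g., the Jetson Orin Nano row
usable = 0.3 * device_peak_tops
print("fits" if need <= usable else "does not fit")
```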