Sensing and Artificial Perception for Robots in Precision Forestry: A Survey
Figures

Figure 1. Distribution of surveyed works from 2018–2023 according to application area.

Figure 2. The *Ranger* landscape maintenance robot developed in the SEMFIRE project. For more details, please refer to [32].

Figure 3. The SEMFIRE *Scout* UAV platform on the left and an illustrative deployment of the SEMFIRE solution on the right: (1) the heavy-duty, multi-purpose *Ranger* can autonomously mulch down the thickest brushes and cut down small trees to reduce the risk of wildfires; (2) the area is explored (finding new regions of interest for landscaping) and patrolled (checking the state of these regions of interest) by *Scouts*, which have the additional tasks of estimating the pose of each other and of the *Ranger*, and of supervising the area for external elements (e.g., living beings).

Figure 4. The RHEA robot fleet on a wheat spraying mission. RHEA focused on the development of novel techniques for weed management in agriculture and forestry, mainly through the use of heterogeneous robotic teams involving autonomous tractors and Unmanned Aerial Vehicles (UAVs). Reproduced with permission.

Figure 5. An illustration of an autonomous robot performing landscaping in a young forest. The circled trees are the main stems and should be kept, while the others are to be cut. Reproduced from [51] with permission.

Figure 6. The Sweeper robot (a), a sweet-pepper harvesting robot operating in a greenhouse, and (b) the output of Sweeper's pepper detection technique. The Sweeper project aims to develop an autonomous harvesting robot, based on the developments of the CROPS project, that can operate in real-world conditions. Reproduced with permission. Source: www.sweeper-robot.eu, accessed on 30 June 2023.

Figure 7. Sensing challenges for forestry robotics.

Figure 8. RGB (left) and thermal (right) images of bushes and shrubbery. A variation of about 7 °C is visible in the heat distribution of the thermal image. Such a temperature variation has an impact on overall plant water stress and, therefore, on plant health.

Figure 9. Example output of the semantic segmentation model applied to the robotic perception pipeline presented in [157], designed to perform landscaping in woodlands to reduce the amount of living flammable material (known as "Fuel") for wildfire prevention. The ground-truth image is shown on the left and the corresponding prediction on the right. The model takes multispectral images as input; the segmentation classes and their colour coding are as follows: "Background" (black), "Fuel" (red), "Canopies" (green), "Trunks" (brown; not present in this example), "Humans" (yellow), and "Animals" (purple). The model consists of an AdapNet++ backbone with an eASPP progressive decoder, fine-tuned using the Bonnetal framework with ImageNet pre-trained weights for the whole model.

Figure 10. Example results of semantic segmentation applied directly to a raw point cloud. The top image shows the original point cloud and the bottom image shows the result of semantic segmentation [188], considering eight different classes (most of which are represented in the example).

Figure 11. Depth completion FCN, called ENet [195], applied to a synthetic forestry dataset [196]. The sparse-depth image shown on the top right is generated by projecting points from a point cloud produced by a (simulated) LiDAR sensor onto the image space of the camera producing the RGB image shown on the top left; since the LiDAR sensor is tilted slightly downwards to prioritise ground-level plants, only the bottom half of the image includes depth information from the point cloud. The depth completion method, which uses both the RGB image and the sparse-depth image as inputs to estimate the corresponding dense depth image, produces the output shown on the bottom left; the ground-truth dense depth image is shown on the bottom right for comparison.

Figure 12. Overview diagram of a data augmentation process, from [248]. Data from a specific domain are forwarded into a data augmentation unit, potentially curated by a human expert, which in turn produces an augmented dataset containing the original data and newly generated artificial samples.

Figure 13. GAN image translation training from [251] to generate corresponding NIR channels of multispectral images: an original multispectral image (left); the ground-truth image the model attempts to predict (centre, top); a green channel image, part of the model input (centre, second from top); a semantic segmentation image, whose label values are part of the model input (centre, third from top); and a red channel image, part of the model input (centre, bottom).

Figure 14. GAN image translation generation from [251] of a synthetic NIR channel and the corresponding final "fake" multispectral image from a fully annotated RGB input image (left): a green channel image, part of the model input, fed forward to be merged after generation with the synthetic NIR channel (centre, top and right, top); a semantic segmentation image, whose label values are part of the model input (centre, second from top); a red channel image, part of the model input, fed forward to be merged after generation with the synthetic NIR channel (centre, bottom and right, bottom); and the synthetic NIR channel image predicted by the model, merged afterwards with the real red and green channels into a synthetic multispectral image (right, second from top).

Figure 15. The 3D point cloud representation of a forest (source: Montmorency dataset [90]). The three axes XYZ at the origin of the robot's coordinate system are represented in red, green, and blue, respectively.

Figure 16. System architecture from [296], where multiple UAVs autonomously perform onboard sensing, vehicle state estimation, local mapping, and exploration planning, and a centralised offboard mapping station performs cooperative SLAM by detecting loop closures and recovering associations observed in multiple submaps of a forest environment. Reproduced with permission.

Figure 17. An overview of a robot team operating with the Modular Framework for Distributed Semantic Mapping (MoDSeM) [343,344]. Each team member can have its own sensors, perception modules, and semantic map, which can be shared arbitrarily with the rest of the team as needed. Each robot is also able to receive signals and semantic map layers from other robots, which are used as input by perception modules to achieve a unified semantic map.

Figure 18. AgRob V16 mobile platform and its multisensory system for forestry perception. Reproduced from [350] with permission.

Figure 19. Diagram overview of a UAV operating with a perception system developed at Carnegie Mellon's Robotics Institute, which ultimately creates a dense semantic map to identify flammable materials in a forest environment using a full OctoMap representation [31].

Figure 20. An overview of the perceptual pipeline developed by the FRUC group for identifying clusters of flammable material for maintenance, using a UGV equipped with a multispectral camera and LiDAR in real-time forestry scenarios [107].

Figure 21. SEMFIRE distributed system architecture, based on the perceptual pipeline of Figure 20 and the Modular Framework for Distributed Semantic Mapping (MoDSeM) of Figure 17. Please refer to [107,343,344,352] for more details.

Figure 22. SEMFIRE computational resource architecture [17].
Abstract
1. Introduction
1.1. Motivations
1.2. Contributions, Methodology, and Document Structure
- We start by enumerating the research groups involved in introducing robots and other autonomous systems in precision forestry and related fields (Section 2).
- Next, we present a review of the sensing technologies used in these applications and discuss their relevance (Section 3).
- We follow this by providing a survey of the algorithms and solutions underpinning artificial perception systems in this context (Section 4).
- We finish by discussing the current scientific and technological landscape, including an analysis of open research questions, drawing our final conclusions, and proposing a tentative roadmap for the future (Section 5).
2. Research Groups Involved in the Research and Development of Robots in Precision Forestry and Agriculture
2.1. Research and Development in Portugal and Spain
2.2. Research and Development throughout the Rest of Europe
2.3. Research and Development throughout the Rest of the World
3. Sensing Technologies
3.1. Cameras and Other Imaging Sensors
- Normalised Difference Vegetation Index (NDVI) [680, 800] nm: to improve chlorophyll detection and measure the general health status of crops; the optimal wavelength varies with the type of plant;
- Red edge NDVI [705, 750] nm: to detect changes, in particular abrupt reflectance increases at the red/near-infrared border (chlorophyll strongly absorbs wavelengths up to around 700 nm);
- Simple Ratio Index (SRI) [680, 800] nm;
- Photochemical Reflectance Index (PRI) [531, 570] nm;
- Plant Senescence Reflectance Index (PSRI) [520, 680, 800] nm;
- Normalised Phaeophytization Index (NPQI) [415, 435] nm;
- Structural Independent Pigment Index (SIPI) [445, 680, 800] nm;
- Leaf Rust Disease Severity Index (LRDSI) [455, 605] nm: to allow detection of leaf rust (a computational sketch of several of these indices follows below).
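These indices amount to simple per-pixel band arithmetic over co-registered reflectance images. As an illustration, the sketch below computes a few of them with NumPy; the band-naming convention (`r680` for reflectance at 680 nm, and so on) is ours, and the PSRI formulation follows the band triplet listed above, noting that exact band choices vary across the literature.

```python
import numpy as np

def ndvi(r680, r800):
    # Normalised Difference Vegetation Index: (NIR - red) / (NIR + red).
    return (r800 - r680) / (r800 + r680 + 1e-8)  # epsilon avoids division by zero

def red_edge_ndvi(r705, r750):
    # Red edge NDVI: captures the reflectance rise at the red/NIR border.
    return (r750 - r705) / (r750 + r705 + 1e-8)

def sri(r680, r800):
    # Simple Ratio Index: NIR over red.
    return r800 / (r680 + 1e-8)

def pri(r531, r570):
    # Photochemical Reflectance Index.
    return (r531 - r570) / (r531 + r570 + 1e-8)

def psri(r520, r680, r800):
    # Plant Senescence Reflectance Index, using the band triplet listed above
    # (assumption: exact band definitions differ between authors).
    return (r680 - r520) / (r800 + 1e-8)

# Toy example on random reflectance "images" (H x W, values in [0, 1]).
rng = np.random.default_rng(0)
bands = {wl: rng.random((480, 640)) for wl in (520, 531, 570, 680, 705, 750, 800)}
print("mean NDVI:", ndvi(bands[680], bands[800]).mean())
```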
3.2. Range-Based Sensors
4. Artificial Perception: Approaches, Algorithms, and Systems
- It should allow the robot to navigate through the site while effectively and safely avoiding obstacles under all expected environmental conditions.
- It should equip the robot with the capacity to ensure the safety of both humans and local fauna.
- It must allow the robot to find, select, and act appropriately upon the diverse vegetation encountered in the target site, according to the designated task and the tree species comprising forest production for that site; in particular, it must distinguish between what should be protected and what should be removed, as defined by the end user.
- Finally, its outcomes should go beyond reproducing a layman's perspective and instead be modulated by the specifics of tasks informed by expert knowledge in forestry operations.
4.1. Forest Scene Analysis and Parsing
- The object of the task: for example, a specific plant or any part of that plant, or plants of the same species of interest in general;
- Any distractors, such as other plants (of other species, or of the same species but not the particular plant to be acted upon);
- Secondary or ancillary entities to the task, such as humans (co-workers or bystanders), animal wildlife, navigation paths, obstacles to navigation and actuation, and geological features (ridges, slopes, and other non-living objects).
- Semantic segmentation;
- Volumetric mapping;
- Semantic label projection (sketched in the code example below).
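To make the third step concrete, the sketch below shows one common way of attaching per-pixel semantic labels to LiDAR points: transform the points into the camera frame, project them through a pinhole intrinsic matrix, and sample the label image at the resulting pixel coordinates. The function and variable names are ours, and the geometry is deliberately minimal (no distortion model, single camera), so this is a sketch rather than the exact method of any specific system surveyed here.

```python
import numpy as np

def project_labels(points_lidar, T_cam_lidar, K, label_img):
    """Assign a semantic label to each 3D point by projecting it into the image.

    points_lidar: (N, 3) points in the LiDAR frame.
    T_cam_lidar:  (4, 4) rigid transform from the LiDAR to the camera frame.
    K:            (3, 3) pinhole camera intrinsic matrix.
    label_img:    (H, W) integer label image from semantic segmentation.
    Returns (N,) labels; -1 marks points outside the camera frustum.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coordinates
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]           # into the camera frame
    labels = np.full(n, -1, dtype=int)
    in_front = pts_cam[:, 2] > 0                         # keep points ahead of the camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)  # perspective division
    h, w = label_img.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = label_img[uv[valid, 1], uv[valid, 0]]
    return labels

# Example with an identity extrinsic and an illustrative intrinsic matrix.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts = np.array([[0.0, 0.0, 5.0], [2.0, 1.0, -3.0]])  # second point lies behind the camera
print(project_labels(pts, np.eye(4), K, np.zeros((480, 640), dtype=int)))  # -> [0, -1]
```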
4.1.1. Image Segmentation: Object Detection and Semantic Segmentation
4.1.2. Spatial Representations in 3D and Depth Completion Methods
4.1.3. Metric-Semantic Mapping
4.1.4. Traversability Analysis for Navigation
4.1.5. Datasets and Learning
| Name | Type | Sensors | Environment | No. Frames/Scans | Labelled |
|---|---|---|---|---|---|
| FinnDataset [235] | 2D | RGB | Real/Forest | 360 k | — |
| ForTrunkDet [237] | 2D | RGB/Thermal | Real/Forest | 3 k | 3 k |
| RANUS [234] | 2D | RGB/NIR | Real/Urban | 40 k | 4 k |
| Cityscapes [238] | 2.5D | RGB-D | Real/Urban | 25 k | 25 k |
| LVFDD [239] | 2.5D | RGB-D/GPS/IMU | Real/Forest | 135 k | — |
| Freiburg [156] | 2.5D | RGB-D/NIR | Real/Forest | 15 k | 1 k |
| SynthTree43K [240] | 2.5D | RGB-D | Synthetic/Forest | 43 k | 43 k |
| SynPhoRest [196] | 2.5D | RGB-D | Synthetic/Forest | 3 k | 3 k |
| KITTI [241] | 3D | RGB/LiDAR/IMU | Real/Urban | 216 k/- | 400 |
| nuScenes [242] | 3D | RGB-D/LiDAR/IMU | Real/Urban | 1.4 M/390 k | 93 k |
| SEMFIRE [243] | 3D | NGR/LiDAR/IMU | Real/Forest | 1.7 k/1.7 k | 1.7 k |
| TartanAir [244] | 3D | RGB-D/LiDAR/IMU | Synthetic/Mixed | 1 M/- | 1 M |
| QuintaReiFMD [236] | 3D | RGB-D/LiDAR | Real/Forest | 1.5 k/3 k | — |
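For training segmentation models on datasets like those in the table, image/label pairs are typically wrapped in a dataset abstraction. The minimal sketch below does this with PyTorch's `torch.utils.data.Dataset`; the directory layout and file naming are illustrative assumptions, as each dataset above ships with its own structure and tooling.

```python
import os
from glob import glob

import numpy as np
from PIL import Image
from torch.utils.data import Dataset

class SegmentationDataset(Dataset):
    """Pairs RGB frames with integer label masks stored side by side.

    Assumed (illustrative) layout: root/images/*.png and root/labels/*.png
    with matching file names.
    """

    def __init__(self, root):
        self.image_paths = sorted(glob(os.path.join(root, "images", "*.png")))
        self.label_paths = sorted(glob(os.path.join(root, "labels", "*.png")))
        assert len(self.image_paths) == len(self.label_paths)

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, i):
        image = np.asarray(Image.open(self.image_paths[i]).convert("RGB"))
        label = np.asarray(Image.open(self.label_paths[i]))  # per-pixel class ids
        # Channels-first float image and integer mask, as most frameworks expect.
        return (image.transpose(2, 0, 1).astype(np.float32) / 255.0,
                label.astype(np.int64))
```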
4.1.6. Improving Learning: Data Augmentation and Transfer Learning
4.2. Localization and Mapping
4.3. Cooperative Perception
4.4. Perception Systems and Architectures
4.5. Computational Resource Management and Real-Time Operation Considerations
- The distributed computing system of the Ranger consists of two main components. The first is the Sundance VCS-1, a PC/104 Linux stack comprising two main parts: the EMC2 board, a PCIe/104 OneBank carrier for Trenz-compatible SoC modules, and the AMD-Xilinx UltraScale+ ZU4EV multi-processor system-on-chip. The ZU4EV includes a quad-core ARM Cortex-A53 MPCore processor, an ARM Cortex-R5 real-time processing unit (RPU), an ARM Mali-400 GPU, and programmable logic (PL). The VCS-1 provides 2 GB of onboard DDR4 memory shared between the processing system and the programmable logic. The second component is an Intel Core i7-8700 CPU with 16 GB of DDR4 memory and an NVIDIA GeForce RTX 600. This distributed system is specifically designed for accelerating state-of-the-art AI algorithms using Vitis AI, CUDA, and cuDNN, implemented with AI frameworks such as TensorFlow, Caffe, and PyTorch. Both computational devices run the Robot Operating System (ROS).
- The Scouts' computing system is based on an Intel NUC i7 with 4 GB of DDR memory, also running ROS. The Scouts are responsible for processing all sensory data locally using onboard Scout modules; only a minimal set of localization and post-processed data is exchanged with the Ranger over a wireless connection (a minimal sketch of such an exchange is given below).
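To make the Scout-to-Ranger exchange concrete, the minimal ROS node below publishes a post-processed pose estimate at a low rate, standing in for the "minimal set of localization and post-processed data" mentioned above. The topic name, message type, and rate are illustrative assumptions on our part, not the actual SEMFIRE interfaces.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped

def main():
    # Scout-side node: all heavy sensory processing stays local; only the
    # resulting pose estimate crosses the wireless link to the Ranger.
    rospy.init_node('scout_pose_relay')
    pub = rospy.Publisher('/scout/pose_estimate', PoseStamped, queue_size=10)
    rate = rospy.Rate(5)  # a low rate keeps wireless bandwidth usage small
    while not rospy.is_shutdown():
        msg = PoseStamped()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = 'map'
        # In a real system the pose would come from onboard state estimation;
        # here the identity orientation stands in as a placeholder.
        msg.pose.orientation.w = 1.0
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    main()
```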
5. Discussion and Conclusions
5.1. Current Scientific and Technological Landscape: Open Questions and Opportunities
5.1.1. Lack of Attention to Precision Forestry
5.1.2. Lack of Available Data
5.1.3. Lack of All-Encompassing Open Software Frameworks for Perception
5.1.4. Lack of Solutions for Robot Swarms and Teams of Robots
5.1.5. Lack of End-User Involvement and Specific Use Case Scenarios
5.1.6. Lack of Computational Resource Planning and Management to Satisfy Real-Time Operation Constraints
- Power consumption and cost;
- The potentially distributed nature of the available computational resources (both in a single, specific robotic platform and between several members in a swarm or heterogeneous team) and its consequences on module deployment and execution.
5.1.7. Lack of Field Testing
5.2. Conclusions
5.2.1. Key Findings
- Challenges in artificial perception: We identified significant challenges in the development of artificial perception systems for precision forestry, including ensuring safe operations (for both the robot and other living beings), enabling multiscale sensing and perception, and addressing specialised, expert-informed tasks. Despite remarkable technical advances, these challenges persist and remain relevant today.
- Importance of precision forestry techniques: Our survey highlights the paramount importance of precision forestry-specific artificial perception techniques, enabling robots to navigate and perform localised precision tasks. While actuation aspects such as flying, locomotion, or manipulation have seen significant progress, perception remains the most challenging component, requiring ongoing attention.
- Robust and integrated perception: Achieving full autonomy in forestry robotics requires the integration of a robust, integrated, and comprehensive set of perceptual functionalities. While progress has been made, fully reliable multipurpose autonomy remains an aspiration, with ongoing discussions about autonomy at task-specific levels, and whether operations with no human in the loop are ever needed, given the safety concerns involved.
- Software frameworks and interoperability: The absence of comprehensive software frameworks specific to perception has led to a fragmented landscape of technological resources. Addressing this issue is essential to advance the field, along with promoting interoperability among existing techniques, which tend to have disparate requirements and outputs.
5.2.2. Tentative Roadmap for the Future
- Advancements in sensing technologies: We have seen a consolidated use of popular sensors in recent decades, and advancements in sensing and edge-processing technologies can be anticipated, which are likely to impact the next generation of artificial perception and decision-making systems for automated forestry.
- Multi-robot systems for cooperative perception: Future research should focus on multi-robot systems, both homogeneous and heterogeneous, to enhance cooperative perception. This approach can help reduce soil damage and improve overall efficiency in forestry operations.
- Addressing rural abandonment: Given the increasing issue of rural abandonment in many developed countries, governments should improve the attractiveness of forestry work and consider investing in automated solutions to maintain and protect forested areas, mitigating risks such as wildfires in poorly maintained or abandoned forest areas.
- Clarifying requirements and use cases: Stakeholders should work together to specify clear requirements and use cases that align machines with the tasks at hand, increasing the academic drive and pushing the industry towards the introduction of robots in forestry.
- Benchmarks and standards: The community should collaborate to develop benchmarks and standard methods for measuring success and sharing useful datasets.
- Integrated co-robot-human teams: As we address current challenges, we envision a future where autonomous robotic swarms or multiple robots engage in human–robot interaction (HRI) to assist human co-worker experts, thereby forming an integrated co-robot–human team for joint forestry operations.
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| 2D | two dimensions/two-dimensional |
| 2.5D | two dimensions plus depth |
| 3D | three dimensions/three-dimensional |
| 6D | six dimensions/six-dimensional |
| BNN | Bayesian neural network |
| CNN | convolutional neural network |
| CORE | Centre of Operations for Rethinking Engineering |
| DNN | deep neural network |
| FCN | fully convolutional neural network |
| GAN | generative adversarial network |
| GDP | gross domestic product |
| GIS | geographic information systems |
| GNSS | global navigation satellite system |
| GPS | global positioning system |
| HRI | human–robot interaction |
| ICP | iterative closest point |
| IMU | inertial measurement unit |
| LADAR | laser detection and ranging |
| LiDAR | light detection and ranging |
| LRDSI | leaf rust disease severity index |
| LRF | laser range finder |
| MCD | Monte Carlo dropout |
| ML | machine learning |
| MoDSeM | modular framework for distributed semantic mapping |
| NDVI | normalised difference vegetation index |
| NIR | near-infrared |
| NN | nearest neighbour |
| NPQI | normalised phaeophytization index |
| NTFP | non-timber forest product |
| NTU | Nottingham Trent University |
| PCA | principal component analysis |
| PRI | photochemical reflectance index |
| PSRI | plant senescence reflectance index |
| RAISE | robotics and artificial intelligence initiative for a sustainable environment |
| RF | random forest |
| RGB | red–green–blue |
| RGB-D | red–green–blue–depth |
| ROS | Robot Operating System |
| SAFEFOREST | semi-autonomous robotic system for forest cleaning and fire prevention |
| SDG | UN sustainable development goal |
| SEMFIRE | safety, exploration, and maintenance of forests with ecological robotics |
| SIPI | structural independent pigment index |
| SLAM | simultaneous localization and mapping |
| SRI | simple ratio index |
| SVM | support vector machine |
| SWIR | short-wave infrared |
| TRL | technological readiness level |
| TSDF | truncated signed distance field |
| UAV | unmanned aerial vehicle |
| UGV | unmanned ground vehicle |
| VIS-NIR | visible and near-infrared |
| VSWIR | visible-to-short-wave-infrared |
| WWUI | wildland and wildland–urban interface |
References
- Agrawal, A.; Cashore, B.; Hardin, R.; Shepherd, G.; Benson, C.; Miller, D. Economic Contributions of Forests. Backgr. Pap. 2013, 1, 1–132. [Google Scholar]
- Vaughan, R.C.; Munsell, J.F.; Chamberlain, J.L. Opportunities for Enhancing Nontimber Forest Products Management in the United States. J. For. 2013, 111, 26–33. [Google Scholar] [CrossRef]
- Hansen, K.; Malmaeus, M. Ecosystem Services in Swedish Forests. Scand. J. For. Res. 2016, 31, 626–640. [Google Scholar] [CrossRef]
- Karsenty, A.; Blanco, C.; Dufour, T. Forests and Climate Change—Instruments Related to the United Nations Framework Convention on Climate Change and Their Potential for Sustainable Forest Management in Africa; Forests and Climate Change Working Paper; FAO: Rome, Italy, 2003; Available online: https://www.fao.org/documents/card/en/c/a2e6e6ef-baee-5922-9bc4-c3b2bf5cdb80/ (accessed on 1 July 2023).
- Ringdahl, O. Automation in Forestry: Development of Unmanned Forwarders. Ph.D. Thesis, Institutionen för Datavetenskap, Umeå Universitet, Umeå, Sweden, 2011. [Google Scholar]
- Silversides, C.R. Broadaxe to Flying Shear: The Mechanization of Forest Harvesting East of the Rockies; Technical Report; National Museum of Science and Technology: Ottawa, ON, Canada, 1997; ISBN 0-660-15980-5. [Google Scholar]
- UN. Report of the Open Working Group of the General Assembly on Sustainable Development Goals. 2014. Available online: https://digitallibrary.un.org/record/778970 (accessed on 1 July 2023).
- Guenat, S.; Purnell, P.; Davies, Z.G.; Nawrath, M.; Stringer, L.C.; Babu, G.R.; Balasubramanian, M.; Ballantyne, E.E.F.; Bylappa, B.K.; Chen, B.; et al. Meeting Sustainable Development Goals via Robotics and Autonomous Systems. Nat. Commun. 2022, 13, 3559. [Google Scholar] [CrossRef] [PubMed]
- Choudhry, H.; O’Kelly, G. Precision Forestry: A Revolution in the Woods. 2018. Available online: https://www.mckinsey.com/industries/paper-and-forest-products/our-insights/precision-forestry-a-revolution-in-the-woods (accessed on 1 July 2023).
- San-Miguel-Ayanz, J.; Schulte, E.; Schmuck, G.; Camia, A.; Strobl, P.; Liberta, G.; Giovando, C.; Boca, R.; Sedano, F.; Kempeneers, P.; et al. Comprehensive Monitoring of Wildfires in Europe: The European Forest Fire Information System (EFFIS); IntechOpen: London, UK, 2012. [Google Scholar] [CrossRef]
- Moreira, F.; Pe’er, G. Agricultural Policy Can Reduce Wildfires. Science 2018, 359, 1001. [Google Scholar] [CrossRef] [PubMed]
- Gómez-González, S.; Ojeda, F.; Fernandes, P.M. Portugal and Chile: Longing for Sustainable Forestry While Rising from the Ashes. Environ. Sci. Policy 2018, 81, 104–107. [Google Scholar] [CrossRef]
- Ribeiro, C.; Valente, S.; Coelho, C.; Figueiredo, E. A Look at Forest Fires in Portugal: Technical, Institutional, and Social Perceptions. Scand. J. For. Res. 2015, 30, 317–325. [Google Scholar] [CrossRef]
- Suger, B.; Steder, B.; Burgard, W. Traversability Analysis for Mobile Robots in Outdoor Environments: A Semi-Supervised Learning Approach Based on 3D-lidar Data. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Washington State Convention Center, Seattle, WA, USA, 25–30 May 2015; pp. 3941–3946. [Google Scholar] [CrossRef]
- Siegwart, R.; Lamon, P.; Estier, T.; Lauria, M.; Piguet, R. Innovative Design for Wheeled Locomotion in Rough Terrain. Robot. Auton. Syst. 2002, 40, 151–162. [Google Scholar] [CrossRef]
- Habib, M.K.; Baudoin, Y. Robot-Assisted Risky Intervention, Search, Rescue and Environmental Surveillance. Int. J. Adv. Robot. Syst. 2010, 7, 10. [Google Scholar] [CrossRef]
- Machado, P.; Bonnell, J.; Brandenburgh, S.; Ferreira, J.F.; Portugal, D.; Couceiro, M. Robotics Use Case Scenarios. In Towards Ubiquitous Low-Power Image Processing Platforms; Jahre, M., Göhringer, D., Millet, P., Eds.; Springer: Cham, Switzerland, 2021; pp. 151–172. [Google Scholar] [CrossRef]
- Panzieri, S.; Pascucci, F.; Ulivi, G. An Outdoor Navigation System Using GPS and Inertial Platform. IEEE/ASME Trans. Mechatron. 2002, 7, 134–142. [Google Scholar] [CrossRef]
- Gougeon, F.A.; Kourtz, P.H.; Strome, M. Preliminary Research on Robotic Vision in a Regenerating Forest Environment. In Proceedings of the International Symposium on Intelligent Robotic Systems, Grenoble, France, 11–15 July 1994; Volume 94, pp. 11–15. Available online: http://cfs.nrcan.gc.ca/publications?id=4582 (accessed on 1 July 2023).
- Thorpe, C.; Durrant-Whyte, H. Field Robots. In Proceedings of the 10th International Symposium of Robotics Research (ISRR’01), Lorne, Australia, 9–12 November 2001. [Google Scholar]
- Kelly, A.; Stentz, A.; Amidi, O.; Bode, M.; Bradley, D.; Diaz-Calderon, A.; Happold, M.; Herman, H.; Mandelbaum, R.; Pilarski, T.; et al. Toward Reliable off Road Autonomous Vehicles Operating in Challenging Environments. Int. J. Robot. Res. 2006, 25, 449–483. [Google Scholar] [CrossRef]
- Lowry, S.; Milford, M.J. Supervised and Unsupervised Linear Learning Techniques for Visual Place Recognition in Changing Environments. IEEE Trans. Robot. 2016, 32, 600–613. [Google Scholar] [CrossRef]
- Aguiar, A.S.; dos Santos, F.N.; Cunha, J.B.; Sobreira, H.; Sousa, A.J. Localization and mapping for robots in agriculture and forestry: A survey. Robotics 2020, 9, 97. [Google Scholar] [CrossRef]
- Oliveira, L.F.P.; Moreira, A.P.; Silva, M.F. Advances in Forest Robotics: A State-of-the-Art Survey. Robotics 2021, 10, 53. [Google Scholar] [CrossRef]
- SEMFIRE. Safety, Exploration and Maintenance of Forests with Ecological Robotics (SEMFIRE, Ref. CENTRO-01-0247-FEDER-03269). 2023. Available online: https://semfire.ingeniarius.pt (accessed on 1 July 2023).
- CORE. Centre of Operations for Rethinking Engineering (CORE, Ref. CENTRO-01-0247-FEDER-037082). 2023. Available online: https://core.ingeniarius.pt (accessed on 1 July 2023).
- Couceiro, M.; Portugal, D.; Ferreira, J.F.; Rocha, R.P. SEMFIRE: Towards a New Generation of Forestry Maintenance Multi-Robot Systems. In Proceedings of the IEEE/SICE International Symposium on System Integration, Sorbone University, Paris, France, 14–16 January 2019. [Google Scholar]
- SAFEFOREST. Semi-Autonomous Robotic System for Forest Cleaning and Fire Prevention (SafeForest, Ref. CENTRO-01-0247-FEDER-045931). 2023. Available online: https://safeforest.ingeniarius.pt (accessed on 1 July 2023).
- Fairfield, N.; Wettergreen, D.; Kantor, G. Segmented SLAM in three-dimensional environments. J. Field Robot. 2010, 27, 85–103. [Google Scholar] [CrossRef]
- Silwal, A.; Parhar, T.; Yandun, F.; Baweja, H.; Kantor, G. A robust illumination-invariant camera system for agricultural applications. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 3292–3298. [Google Scholar]
- Russell, D.J.; Arevalo-Ramirez, T.; Garg, C.; Kuang, W.; Yandun, F.; Wettergreen, D.; Kantor, G. UAV Mapping with Semantic and Traversability Metrics for Forest Fire Mitigation. In Proceedings of the ICRA 2022 Workshop in Innovation in Forestry Robotics: Research and Industry Adoption, Philadelphia, PA, USA, 23–27 May 2022. [Google Scholar]
- Portugal, D.; Andrada, M.E.; Araújo, A.G.; Couceiro, M.S.; Ferreira, J.F. ROS Integration of an Instrumented Bobcat T190 for the SEMFIRE Project. In Robot Operating System (ROS); Springer: Berlin/Heidelberg, Germany, 2021; pp. 87–119. [Google Scholar]
- Reis, R.; dos Santos, F.N.; Santos, L. Forest Robot and Datasets for Biomass Collection. In Proceedings of the Robot 2019: Fourth Iberian Robotics Conference: Advances in Robotics; Springer: Berlin/Heidelberg, Germany, 2020; Volume 1, pp. 152–163. [Google Scholar]
- SCORPION. Scorpion-H2020—Cost Effective Robots for Smart Precision Spraying. 2023. Available online: https://scorpion-h2020.eu/ (accessed on 1 July 2023).
- Aguiar, A.S.; Dos Santos, F.N.; Sobreira, H.; Boaventura-Cunha, J.; Sousa, A.J. Localization and Mapping on Agriculture Based on Point-Feature Extraction and Semiplanes Segmentation From 3D LiDAR Data. Front. Robot. AI 2022, 9, 832165. [Google Scholar] [CrossRef]
- RHEA. Robot Fleets for Highly Effective Agriculture and Forestry Management|Projects|FP7-NMP. 2018. Available online: https://cordis.europa.eu/project/rcn/95055_en.html (accessed on 1 July 2023).
- Emmi, L.; Gonzalez-de-Santos, P. Mobile Robotics in Arable Lands: Current State and Future Trends. In Proceedings of the 2017 European Conference on Mobile Robots (ECMR), Paris, France, 6–8 September 2017; pp. 1–6. [Google Scholar] [CrossRef]
- Gonzalez-de-Santos, P.; Ribeiro, A.; Fernandez-Quintanilla, C.; Lopez-Granados, F.; Brandstoetter, M.; Tomic, S.; Pedrazzi, S.; Peruzzi, A.; Pajares, G.; Kaplanis, G.; et al. Fleets of Robots for Environmentally-Safe Pest Control in Agriculture. Precis. Agric. 2017, 18, 574–614. [Google Scholar] [CrossRef]
- VINEROBOT. VINEyardROBOT|Projects|FP7-ICT. 2018. Available online: https://cordis.europa.eu/project/rcn/111031_en.html (accessed on 1 July 2023).
- Costantini, E.A.C.; Castaldini, M.; Diago, M.P.; Giffard, B.; Lagomarsino, A.; Schroers, H.J.; Priori, S.; Valboa, G.; Agnelli, A.E.; Akça, E.; et al. Effects of Soil Erosion on Agro-Ecosystem Services and Soil Functions: A Multidisciplinary Study in Nineteen Organically Farmed European and Turkish Vineyards. J. Environ. Manag. 2018, 223, 614–624. [Google Scholar] [CrossRef]
- VineScout. News & Gallery|VineScout. 2023. Available online: http://vinescout.eu/web/newsgallery-2 (accessed on 1 July 2023).
- Héder, M. From NASA to EU: The Evolution of the TRL Scale in Public Sector Innovation. Innov. J. 2017, 22, 1–23. Available online: https://innovation.cc/document/2017-22-2-3-from-nasa-to-eu-the-evolution-of-the-trl-scale-in-public-sector-innovation/ (accessed on 1 July 2023).
- Riquelme, M.T.; Barreiro, P.; Ruiz-Altisent, M.; Valero, C. Olive Classification According to External Damage Using Image Analysis. J. Food Eng. 2008, 87, 371–379. [Google Scholar] [CrossRef]
- Valente, J.; Sanz, D.; Barrientos, A.; del Cerro, J.; Ribeiro, Á.; Rossi, C.; Valente, J.; Sanz, D.; Barrientos, A.; del Cerro, J.; et al. An Air-Ground Wireless Sensor Network for Crop Monitoring. Sensors 2011, 11, 6088–6108. [Google Scholar] [CrossRef]
- García-Santillán, I.D.; Pajares, G. On-Line Crop/Weed Discrimination through the Mahalanobis Distance from Images in Maize Fields. Biosyst. Eng. 2018, 166, 28–43. [Google Scholar] [CrossRef]
- CROPS. Intelligent Sensing and Manipulation for Sustainable Production and Harvesting of High Value Crops, Clever Robots for Crops|Projects|FP7-NMP. 2018. Available online: https://cordis.europa.eu/project/rcn/96216_en.html (accessed on 1 July 2023).
- Fernández, R.; Montes, H.; Salinas, C.; Sarria, J.; Armada, M. Combination of RGB and Multispectral Imagery for Discrimination of Cabernet Sauvignon Grapevine Elements. Sensors 2013, 13, 7838–7859. [Google Scholar] [CrossRef]
- VINBOT. Autonomous Cloud-Computing Vineyard Robot to Optimise Yield Management and Wine Quality|Projects|FP7-SME. 2018. Available online: https://cordis.europa.eu/project/rcn/111459_en.html (accessed on 1 July 2023).
- BACCHUS. BACCHUS EU Project. 2023. Available online: https://bacchus-project.eu/ (accessed on 1 July 2023).
- Guzmán, R.; Ariño, J.; Navarro, R.; Lopes, C.M.; Graça, J.; Reyes, M.; Barriguinha, A.; Braga, R. Autonomous Hybrid GPS/Reactive Navigation of an Unmanned Ground Vehicle for Precision Viticulture-VINBOT. In Proceedings of the 62nd German Winegrowers Conference, Stuttgart, Germany, 27–30 November 2016; pp. 1–12. [Google Scholar]
- Vestlund, K.; Hellström, T. Requirements and System Design for a Robot Performing Selective Cleaning in Young Forest Stands. J. Terramech. 2006, 43, 505–525. [Google Scholar] [CrossRef]
- Hellström, T.; Lärkeryd, P.; Nordfjell, T.; Ringdahl, O. Autonomous Forest Vehicles: Historic, Envisioned, and State-of-the-Art. Int. J. For. Eng. 2009, 20, 31–38. [Google Scholar] [CrossRef]
- Hellström, T.; Ostovar, A. Detection of Trees Based on Quality Guided Image Segmentation. In Proceedings of the Second International Conference on Robotics and Associated High-technologies and Equipment for Agriculture and Forestry (RHEA-2014), Madrid, Spain, 21–23 May 2014; pp. 531–540. Available online: https://www.researchgate.net/publication/266556537_Detection_of_Trees_Based_on_Quality_Guided_Image_Segmentation (accessed on 1 July 2023).
- Ostovar, A.; Hellström, T.; Ringdahl, O. Human Detection Based on Infrared Images in Forestry Environments. In Image Analysis and Recognition; Campilho, A., Karray, F., Eds.; Number 9730 in Lecture Notes in Computer Science; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 175–182. [Google Scholar] [CrossRef]
- Hellström, T.; Ringdahl, O. A Software Framework for Agricultural and Forestry Robotics. In Proceedings of the DIVA; Pisa University Press: Pisa, Italy, 2012; pp. 171–176. Available online: http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-60154 (accessed on 1 July 2023).
- Hera, P.M.L.; Trejo, O.M.; Lindroos, O.; Lideskog, H.; Lindbäck, T.; Latif, S.; Li, S.; Karlberg, M. Exploring the Feasibility of Autonomous Forestry Operations: Results from the First Experimental Unmanned Machine. Authorea 2023. [Google Scholar] [CrossRef]
- Sängstuvall, L.; Bergström, D.; Lämås, T.; Nordfjell, T. Simulation of Harvester Productivity in Selective and Boom-Corridor Thinning of Young Forests. Scand. J. For. Res. 2012, 27, 56–73. [Google Scholar] [CrossRef]
- Lindroos, O.; Ringdahl, O.; La Hera, P.; Hohnloser, P.; Hellström, T.H. Estimating the Position of the Harvester Head—A Key Step towards the Precision Forestry of the Future? Croat. J. For. Eng. 2015, 36, 147–164. [Google Scholar]
- SWEEPER. Sweeper Homepage. 2018. Available online: http://www.sweeper-robot.eu/ (accessed on 1 July 2023).
- SAGA. SAGA—Swarm Robotics for Agricultural Applications. 2018. Available online: http://laral.istc.cnr.it/saga/ (accessed on 1 July 2023).
- Bac, C.W.; Hemming, J.; van Henten, E.J. Stem Localization of Sweet-Pepper Plants Using the Support Wire as a Visual Cue. Comput. Electron. Agric. 2014, 105, 111–120. [Google Scholar] [CrossRef]
- Bac, C.W.; Hemming, J.; van Henten, E.J. Robust Pixel-Based Classification of Obstacles for Robotic Harvesting of Sweet-Pepper. Comput. Electron. Agric. 2013, 96, 148–162. [Google Scholar] [CrossRef]
- Geerling, G.W.; Labrador-Garcia, M.; Clevers, J.G.P.W.; Ragas, A.M.J.; Smits, A.J.M. Classification of Floodplain Vegetation by Data Fusion of Spectral (CASI) and LiDAR Data. Int. J. Remote Sens. 2007, 28, 4263–4284. [Google Scholar] [CrossRef]
- Hemming, J.; Rath, T. Computer-Vision-based Weed Identfication under Field Conditions Using Controlled Lighting. J. Agric. Eng. Res. 2001, 78, 233–243. [Google Scholar] [CrossRef]
- Albani, D.; Nardi, D.; Trianni, V. Field Coverage and Weed Mapping by UAV Swarms. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 4319–4325. [Google Scholar] [CrossRef]
- Digiforest. Digiforest. 2023. Available online: https://digiforest.eu (accessed on 1 July 2023).
- Lottes, P.; Stachniss, C. Semi-Supervised Online Visual Crop and Weed Classification in Precision Farming Exploiting Plant Arrangement. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 5155–5161. [Google Scholar] [CrossRef]
- Lottes, P.; Khanna, R.; Pfeifer, J.; Siegwart, R.; Stachniss, C. UAV-based Crop and Weed Classification for Smart Farming. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 3024–3031. [Google Scholar] [CrossRef]
- Milioto, A.; Stachniss, C. Bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics Using CNNs. arXiv 2018, arXiv:1802.08960. [Google Scholar]
- Lottes, P.; Behley, J.; Milioto, A.; Stachniss, C. Fully Convolutional Networks with Sequential Information for Robust Crop and Weed Detection in Precision Farming. IEEE Robot. Autom. Lett. 2018, 3, 2870–2877. [Google Scholar] [CrossRef]
- Vieri, M.; Sarri, D.; Rimediotti, M.; Lisci, R.; Peruzzi, A.; Raffaelli, M.; Fontanelli, M.; Frasconi, C.; Martelloni, L. RHEA Project Achievement: An Innovative Spray Concept for Pesticide Application to Tree Crops Equipping a Fleet of Autonomous Robots. In Proceedings of the International Conference of Agricultural Engineering CIGR-AgEng2012, Valencia Conference Center, Valencia, Spain, 8–12 July 2012; p. 9. [Google Scholar]
- Oberti, R.; Marchi, M.; Tirelli, P.; Calcante, A.; Iriti, M.; Borghese, A.N. Automatic Detection of Powdery Mildew on Grapevine Leaves by Image Analysis: Optimal View-Angle Range to Increase the Sensitivity. Comput. Electron. Agric. 2014, 104, 1–8. [Google Scholar] [CrossRef]
- Rabatel, G.; Makdessi, N.A.; Ecarnot, M.; Roumet, P. A Spectral Correction Method for Multi-Scattering Effects in Close Range Hyperspectral Imagery of Vegetation Scenes: Application to Nitrogen Content Assessment in Wheat. Adv. Anim. Biosci. 2017, 8, 353–358. [Google Scholar] [CrossRef]
- Lisein, J.; Michez, A.; Claessens, H.; Lejeune, P. Discrimination of Deciduous Tree Species from Time Series of Unmanned Aerial System Imagery. PLoS ONE 2015, 10, e0141006. [Google Scholar] [CrossRef]
- Michez, A.; Piégay, H.; Lisein, J.; Claessens, H.; Lejeune, P. Classification of Riparian Forest Species and Health Condition Using Multi-Temporal and Hyperspatial Imagery from Unmanned Aerial System. Environ. Monit. Assess. 2016, 188, 146. [Google Scholar] [CrossRef]
- Leemans, V.; Marlier, G.; Destain, M.F.; Dumont, B.; Mercatoris, B. Estimation of Leaf Nitrogen Concentration on Winter Wheat by Multispectral Imaging. In Proceedings of the SPIE Commercial + Scientific Sensing and Imaging, Anaheim, CA, USA, 28 April 2017; Bannon, D.P., Ed.; p. 102130. [Google Scholar] [CrossRef]
- Jelavic, E.; Berdou, Y.; Jud, D.; Kerscher, S.; Hutter, M. Terrain-adaptive planning and control of complex motions for walking excavators. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 2684–2691. [Google Scholar]
- Jelavic, E.; Jud, D.; Egli, P.; Hutter, M. Robotic Precision Harvesting: Mapping, Localization, Planning and Control for a Legged Tree Harvester. Field Robot. 2022, 2, 1386–1431. [Google Scholar] [CrossRef]
- THING. THING—SubTerranean Haptic INvestiGator. 2023. Available online: https://thing.put.poznan.pl/ (accessed on 1 July 2023).
- Digumarti, S.T.; Nieto, J.; Cadena, C.; Siegwart, R.; Beardsley, P. Automatic segmentation of tree structure from point cloud data. IEEE Robot. Autom. Lett. 2018, 3, 3043–3050. [Google Scholar] [CrossRef]
- Sa, I.; Chen, Z.; Popović, M.; Khanna, R.; Liebisch, F.; Nieto, J.; Siegwart, R. weedNet: Dense Semantic Weed Classification Using Multispectral Images and MAV for Smart Farming. IEEE Robot. Autom. Lett. 2018, 3, 588–595. [Google Scholar] [CrossRef]
- From, P.J.; Grimstad, L.; Hanheide, M.; Pearson, S.; Cielniak, G. RASberry: Robotic and Autonomous Systems for Berry Production. Mech. Eng. 2018, 140, S14–S18. [Google Scholar] [CrossRef]
- L-CAS. Lincoln Centre for Autonomous Systems Projects. 2012. Available online: https://lcas.lincoln.ac.uk/wp/projects/ (accessed on 1 July 2023).
- L-CAS. Research—Hyperweeding|Harper Adams University. 2023. Available online: https://www.harper-adams.ac.uk/research/project.cfm?id=187 (accessed on 1 July 2023).
- Mozgeris, G.; Jonikavičius, D.; Jovarauskas, D.; Zinkevičius, R.; Petkevičius, S.; Steponavičius, D. Imaging from Manned Ultra-Light and Unmanned Aerial Vehicles for Estimating Properties of Spring Wheat. Precis. Agric. 2018, 19, 876–894. [Google Scholar] [CrossRef]
- Borz, S.A.; Talagai, N.; Cheţa, M.; Montoya, A.G.; Vizuete, D.D.C. Automating Data Collection in Motor-manual Time and Motion Studies Implemented in a Willow Short Rotation Coppice. BioResources 2018, 13, 3236–3249. [Google Scholar] [CrossRef]
- Osterman, A.; Godeša, T.; Hočevar, M.; Širok, B.; Stopar, M. Real-Time Positioning Algorithm for Variable-Geometry Air-Assisted Orchard Sprayer. Comput. Electron. Agric. 2013, 98, 175–182. [Google Scholar] [CrossRef]
- SNOW. Project SNOW • Northern Robotics Laboratory. 2023. Available online: https://norlab.ulaval.ca/research/snow/ (accessed on 1 July 2023).
- Pierzchala, M.; Giguère, P.; Astrup, R. Mapping Forests Using an Unmanned Ground Vehicle with 3D LiDAR and Graph-SLAM. Comput. Electron. Agric. 2018, 145, 217–225. [Google Scholar] [CrossRef]
- Tremblay, J.F.; Béland, M.; Pomerleau, F.; Gagnon, R.; Giguère, P. Automatic 3D Mapping for Tree Diameter Measurements in Inventory Operations. J. Field Robot. 2020, 37, 1328–1346. [Google Scholar] [CrossRef]
- Baril, D.; Deschênes, S.P.; Gamache, O.; Vaidis, M.; LaRocque, D.; Laconte, J.; Kubelka, V.; Giguère, P.; Pomerleau, F. Kilometer-scale autonomous navigation in subarctic forests: Challenges and lessons learned. arXiv 2021, arXiv:2111.13981. [Google Scholar] [CrossRef]
- Zhou, S.; Xi, J.; McDaniel, M.W.; Nishihata, T.; Salesses, P.; Iagnemma, K. Self-Supervised Learning to Visually Detect Terrain Surfaces for Autonomous Robots Operating in Forested Terrain. J. Field Robot. 2012, 29, 277–297. [Google Scholar] [CrossRef]
- McDaniel, M.W.; Nishihata, T.; Brooks, C.A.; Salesses, P.; Iagnemma, K. Terrain Classification and Identification of Tree Stems Using Ground-Based LiDAR. J. Field Robot. 2012, 29, 891–910. [Google Scholar] [CrossRef]
- Guevara, L.; Rocha, R.P.; Cheein, F.A. Improving the manual harvesting operation efficiency by coordinating a fleet of N-trailer vehicles. Comput. Electron. Agric. 2021, 185, 106103. [Google Scholar] [CrossRef]
- Villacrés, J.; Cheein, F.A.A. Construction of 3D maps of vegetation indices retrieved from UAV multispectral imagery in forested areas. Biosyst. Eng. 2022, 213, 76–88. [Google Scholar] [CrossRef]
- Arevalo-Ramirez, T.; Guevara, J.; Rivera, R.G.; Villacrés, J.; Menéndez, O.; Fuentes, A.; Cheein, F.A. Assessment of Multispectral Vegetation Features for Digital Terrain Modeling in Forested Regions. IEEE Trans. Geosci. Remote Sens. 2021, 60, 4405509. [Google Scholar] [CrossRef]
- van Essen, R.; Harel, B.; Kootstra, G.; Edan, Y. Dynamic Viewpoint Selection for Sweet Pepper Maturity Classification Using Online Economic Decisions. Appl. Sci. 2022, 12, 4414. [Google Scholar] [CrossRef]
- Cohen, B.; Edan, Y.; Levi, A.; Alchanatis, V. Early Detection of Grapevine (Vitis vinifera) Downy Mildew (Peronospora) and Diurnal Variations Using Thermal Imaging. Sensors 2022, 22, 3585. [Google Scholar] [CrossRef] [PubMed]
- Windrim, L.; Bryson, M. Forest tree detection and segmentation using high resolution airborne LiDAR. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), The Venetian Macao, Macau, 4–8 November 2019; pp. 3898–3904. [Google Scholar]
- Westling, F.; Underwood, J.; Bryson, M. Graph-based methods for analyzing orchard tree structure using noisy point cloud data. Comput. Electron. Agric. 2021, 187, 106270. [Google Scholar] [CrossRef]
- Windrim, L.; Bryson, M.; McLean, M.; Randle, J.; Stone, C. Automated mapping of woody debris over harvested forest plantations using UAVs, high-resolution imagery, and machine learning. Remote Sens. 2019, 11, 733. [Google Scholar] [CrossRef]
- ROS Agriculture. Robot Agriculture. Available online: https://github.com/ros-agriculture (accessed on 1 July 2023).
- GREENPATROL. Galileo Enhanced Solution for Pest Detection and Control in Greenhouse Fields with Autonomous Service Robots|Projects|H2020. 2018. Available online: https://cordis.europa.eu/project/rcn/212439_en.html (accessed on 1 July 2023).
- Tiozzo Fasiolo, D.; Scalera, L.; Maset, E.; Gasparetto, A. Recent Trends in Mobile Robotics for 3D Mapping in Agriculture. In Proceedings of the Advances in Service and Industrial Robotics; Mechanisms and Machine Science; Müller, A., Brandstötter, M., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 428–435. [Google Scholar] [CrossRef]
- Fountas, S.; Mylonas, N.; Malounas, I.; Rodias, E.; Hellmann Santos, C.; Pekkeriet, E. Agricultural Robotics for Field Operations. Sensors 2020, 20, 2672. [Google Scholar] [CrossRef]
- Ding, H.; Zhang, B.; Zhou, J.; Yan, Y.; Tian, G.; Gu, B. Recent Developments and Applications of Simultaneous Localization and Mapping in Agriculture. J. Field Robot. 2022, 39, 956–983. [Google Scholar] [CrossRef]
- Andrada, M.E.; Ferreira, J.F.; Portugal, D.; Couceiro, M.S. Integration of an Artificial Perception System for Identification of Live Flammable Material in Forestry Robotics. In Proceedings of the 2022 IEEE/SICE International Symposium on System Integration (SII), Online, 9–12 January 2022; pp. 103–108. [Google Scholar]
- Carvalho, A.E.; Ferreira, J.F.; Portugal, D. 3D Traversability Analysis in Forest Environments based on Mechanical Effort. In Proceedings of the 17th International Conference on Intelligent Autonomous Systems (IAS-17), Zagreb, Croatia, 13–16 June 2022; pp. 457–468. [Google Scholar]
- Mendes, J.; Pinho, T.M.; Neves dos Santos, F.; Sousa, J.J.; Peres, E.; Boaventura-Cunha, J.; Cunha, M.; Morais, R. Smartphone Applications Targeting Precision Agriculture Practices—A Systematic Review. Agronomy 2020, 10, 855. [Google Scholar] [CrossRef]
- Oliveira, L.F.; Moreira, A.P.; Silva, M.F. Advances in agriculture robotics: A state-of-the-art review and challenges ahead. Robotics 2021, 10, 52. [Google Scholar] [CrossRef]
- Rovira-Más, F.; Chatterjee, I.; Sáiz-Rubio, V. The Role of GNSS in the Navigation Strategies of Cost-Effective Agricultural Robots. Comput. Electron. Agric. 2015, 112, 172–183. [Google Scholar] [CrossRef]
- Abidi, B.R.; Aragam, N.R.; Yao, Y.; Abidi, M.A. Survey and Analysis of Multimodal Sensor Planning and Integration for Wide Area Surveillance. ACM Comput. Surv. 2009, 41, 1–36. [Google Scholar] [CrossRef]
- Asner, G.P.; Martin, R.E.; Anderson, C.B.; Knapp, D.E. Quantifying Forest Canopy Traits: Imaging Spectroscopy versus Field Survey. Remote Sens. Environ. 2015, 158, 15–27. [Google Scholar] [CrossRef]
- Khanal, S.; Fulton, J.; Shearer, S. An Overview of Current and Potential Applications of Thermal Remote Sensing in Precision Agriculture. Comput. Electron. Agric. 2017, 139, 22–32. [Google Scholar] [CrossRef]
- Lowe, A.; Harrison, N.; French, A.P. Hyperspectral Image Analysis Techniques for the Detection and Classification of the Early Onset of Plant Disease and Stress. Plant Methods 2017, 13, 80. [Google Scholar] [CrossRef] [PubMed]
- Rapaport, T.; Hochberg, U.; Shoshany, M.; Karnieli, A.; Rachmilevitch, S. Combining Leaf Physiology, Hyperspectral Imaging and Partial Least Squares-Regression (PLS-R) for Grapevine Water Status Assessment. ISPRS J. Photogramm. Remote Sens. 2015, 109, 88–97. [Google Scholar] [CrossRef]
- Ristorto, G.; Gallo, R.; Gasparetto, A.; Scalera, L.; Vidoni, R.; Mazzetto, F. A Mobile Laboratory for Orchard Health Status Monitoring in Precision Farming. Chem. Eng. Trans. 2017, 58, 661–666. [Google Scholar] [CrossRef]
- Cubero, S.; Marco-Noales, E.; Aleixos, N.; Barbé, S.; Blasco, J. RobHortic: A Field Robot to Detect Pests and Diseases in Horticultural Crops by Proximal Sensing. Agriculture 2020, 10, 276. [Google Scholar] [CrossRef]
- Clamens, T.; Alexakis, G.; Duverne, R.; Seulin, R.; Fauvet, E.; Fofi, D. Real-Time Multispectral Image Processing and Registration on 3D Point Cloud for Vineyard Analysis. In Proceedings of the 16th International Conference on Computer Vision Theory and Applications, Online, 8–10 February 2021; pp. 388–398. Available online: https://www.scitepress.org/Link.aspx?doi=10.5220/0010266203880398 (accessed on 1 July 2023).
- Halounová, L.; Junek, P.; Petruchová, J. Vegetation Indices–Tools for the Development Evaluation in Reclaimed Areas. In Proceedings of the Global Developments in Environmental Earth Observation from Space: Proceedings of the 25th Annual Symposium of the European Association of Remote Sensing Laboratories; IOS Press Inc.: Porto, Portugal, 2005; p. 339. [Google Scholar]
- Bradley, D.M.; Unnikrishnan, R.; Bagnell, J. Vegetation Detection for Driving in Complex Environments. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; pp. 503–508. [Google Scholar] [CrossRef]
- Meyer, G.E. Machine Vision Identification of Plants. In Recent Trends for Enhancing the Diversity and Quality of Soybean Products; Krezhova, D., Ed.; InTech Europe: Rijeka, Croatia, 2011; pp. 401–420. Available online: https://www.intechopen.com/chapters/22613 (accessed on 1 July 2023).
- Symonds, P.; Paap, A.; Alameh, K.; Rowe, J.; Miller, C. A Real-Time Plant Discrimination System Utilising Discrete Reflectance Spectroscopy. Comput. Electron. Agric. 2015, 117, 57–69. [Google Scholar] [CrossRef]
- Noble, S.D.; Brown, R.B. Plant Species Discrimination Using Spectral/Spatial Descriptive Statistics. In Proceedings of the 1st International Workshop on Computer Image Analysis in Agriculture, Potsdam, Germany, 27–28 August 2009; pp. 27–28. [Google Scholar]
- Feyaerts, F.; van Gool, L. Multi-Spectral Vision System for Weed Detection. Pattern Recognit. Lett. 2001, 22, 667–674. [Google Scholar] [CrossRef]
- Di Gennaro, S.F.; Battiston, E.; Di Marco, S.; Facini, O.; Matese, A.; Nocentini, M.; Palliotti, A.; Mugnai, L. Unmanned Aerial Vehicle (UAV)-Based Remote Sensing to Monitor Grapevine Leaf Stripe Disease within a Vineyard Affected by Esca Complex. Phytopathol. Mediterr. 2016, 55, 262–275. [Google Scholar] [CrossRef]
- Hagen, N.A.; Kudenov, M.W. Review of Snapshot Spectral Imaging Technologies. Opt. Eng. 2013, 52, 090901. [Google Scholar] [CrossRef]
- Ross, P.E. Velodyne Unveils Monster Lidar with 128 Laser Beams. 2017. Available online: https://spectrum.ieee.org/cars-that-think/transportation/sensors/velodyne-unveils-monster-lidar-with-128-laser-beams (accessed on 1 July 2023).
- Schwarz, B. Mapping the World in 3D. Nat. Photonics 2010, 4, 429–430. [Google Scholar] [CrossRef]
- Pellenz, J.; Lang, D.; Neuhaus, F.; Paulus, D. Real-Time 3D Mapping of Rough Terrain: A Field Report from Disaster City. In Proceedings of the 2010 IEEE Safety Security and Rescue Robotics, Bremen, Germany, 26–30 July 2010; pp. 1–6. [Google Scholar] [CrossRef]
- Besl, P.; McKay, N.D. A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
- Durrant-Whyte, H.; Rye, D.; Nebot, E. Localization of autonomous guided vehicles. In Proceedings of the Robotics Research: The Seventh International Symposium, Munich, Germany, 21–24 October 1995; Springer: Berlin/Heidelberg, Germany, 1996; pp. 613–625. [Google Scholar]
- Nüchter, A.; Lingemann, K.; Hertzberg, J.; Surmann, H. 6D SLAM—3D Mapping Outdoor Environments. J. Field Robot. 2007, 24, 699–722. [Google Scholar] [CrossRef]
- Pearson, K. LIII. On lines and planes of closest fit to systems of points in space. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1901, 2, 559–572. [Google Scholar] [CrossRef]
- Neuhaus, F.; Dillenberger, D.; Pellenz, J.; Paulus, D. Terrain Drivability Analysis in 3D Laser Range Data for Autonomous Robot Navigation in Unstructured Environments. In Proceedings of the 2009 IEEE Conference on Emerging Technologies & Factory Automation, Palma de Mallorca, Spain, 22–25 September 2009; pp. 1–4. [Google Scholar] [CrossRef]
- Woods, S. Laser Scanning on the Go. GIM Int. 2016, 29–31. Available online: https://www.gim-international.com/content/article/laser-scanning-on-the-go (accessed on 1 July 2023).
- Wurm, K.M.; Kümmerle, R.; Stachniss, C.; Burgard, W. Improving Robot Navigation in Structured Outdoor Environments by Identifying Vegetation from Laser Data. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 11–15 October 2009; pp. 1217–1222. [Google Scholar] [CrossRef]
- dos Santos, A.A.; Marcato Junior, J.; Araújo, M.S.; Di Martini, D.R.; Tetila, E.C.; Siqueira, H.L.; Aoki, C.; Eltner, A.; Matsubara, E.T.; Pistori, H.; et al. Assessment of CNN-Based Methods for Individual Tree Detection on Images Captured by RGB Cameras Attached to UAVs. Sensors 2019, 19, 3595. [Google Scholar] [CrossRef]
- da Silva, D.Q.; dos Santos, F.N.; Sousa, A.J.; Filipe, V. Visible and Thermal Image-Based Trunk Detection with Deep Learning for Forestry Mobile Robotics. J. Imaging 2021, 7, 176. [Google Scholar] [CrossRef]
- da Silva, D.Q.; dos Santos, F.N.; Filipe, V.; Sousa, A.J.; Oliveira, P.M. Edge AI-Based Tree Trunk Detection for Forestry Monitoring Robotics. Robotics 2022, 11, 136. [Google Scholar] [CrossRef]
- Goeau, H.; Bonnet, P.; Joly, A. Plant Identification Based on Noisy Web Data: The Amazing Performance of Deep Learning (LifeCLEF 2017). In Proceedings of the CLEF 2017—Conference and Labs of the Evaluation Forum, Dublin, Ireland, 11–14 September 2017; pp. 1–13. Available online: https://hal.archives-ouvertes.fr/hal-01629183 (accessed on 1 July 2023).
- Kumar, N.; Belhumeur, P.N.; Biswas, A.; Jacobs, D.W.; Kress, W.J.; Lopez, I.C.; Soares, J.V.B. Leafsnap: A Computer Vision System for Automatic Plant Species Identification. In Computer Vision—ECCV 2012; Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C., Eds.; Number 7573 in Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; pp. 502–516. [Google Scholar] [CrossRef]
- Affouard, A.; Goëau, H.; Bonnet, P.; Lombardo, J.C.; Joly, A. Pl@ Ntnet App in the Era of Deep Learning. In Proceedings of the ICLR 2017 Workshop Track—5th International Conference on Learning Representations, Toulon, France, 24–26 April 2017. [Google Scholar]
- Goëau, H.; Joly, A.; Yahiaoui, I.; Bakić, V.; Verroust-Blondet, A.; Bonnet, P.; Barthélémy, D.; Boujemaa, N.; Molino, J.F. Plantnet Participation at Lifeclef2014 Plant Identification Task. In Proceedings of the CLEF2014 Working Notes Working Notes for CLEF 2014 Conference CEUR-WS, Sheffield, UK, 15–18 September 2014; pp. 724–737. [Google Scholar]
- Sun, Y.; Liu, Y.; Wang, G.; Zhang, H. Deep Learning for Plant Identification in Natural Environment. Comput. Intell. Neurosci. 2017, 2017, 7361042. [Google Scholar] [CrossRef] [PubMed]
- Borregaard, T.; Nielsen, H.; Nørgaard, L.; Have, H. Crop–Weed Discrimination by Line Imaging Spectroscopy. J. Agric. Eng. Res. 2000, 75, 389–400. [Google Scholar] [CrossRef]
- Piron, A.; Leemans, V.; Kleynen, O.; Lebeau, F.; Destain, M.F. Selection of the Most Efficient Wavelength Bands for Discriminating Weeds from Crop. Comput. Electron. Agric. 2008, 62, 141–148. [Google Scholar] [CrossRef]
- Weiss, U.; Biber, P.; Laible, S.; Bohlmann, K.; Zell, A. Plant Species Classification Using a 3D LIDAR Sensor and Machine Learning. In Proceedings of the IEEE Ninth International Conference on Machine Learning and Applications (ICMLA’10), Washington, DC, USA, 12–14 December 2010; pp. 339–345. [Google Scholar]
- Bradley, D.; Thayer, S.; Stentz, A.; Rander, P. Vegetation Detection for Mobile Robot Navigation. Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, Tech. Rep. CMU-RI-TR-04-12. 2004. Available online: http://www.ri.cmu.edu/pub_files/pub4/bradley_david_2004_2/bradley_david_2004_2.pdf (accessed on 1 July 2023).
- Brunner, A.; Gizachew, B. Rapid Detection of Stand Density, Tree Positions, and Tree Diameter with a 2D Terrestrial Laser Scanner. Eur. J. For. Res. 2014, 133, 819–831. [Google Scholar] [CrossRef]
- Fiel, S.; Sablatnig, R. Automated Identification of Tree Species from Images of the Bark, Leaves and Needles; Technical Report CVL-TR-3; TU Wien, Faculty of Informatics, Computer Vision Lab: Vienna, Austria, 2010; Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.379.1376&rep=rep1&type=pdf#page=67 (accessed on 1 July 2023).
- Dechesne, C.; Mallet, C.; Le Bris, A.; Gouet-Brunet, V. Semantic Segmentation of Forest Stands of Pure Species as a Global Optimization Problem. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 141–148. [Google Scholar] [CrossRef]
- Li, H.; Zhang, X.; Jaeger, M.; Constant, T. Segmentation of Forest Terrain Laser Scan Data. In Proceedings of the 9th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry, Seoul, Republic of Korea, 12–13 December 2010; pp. 47–54. [Google Scholar]
- Cerutti, G.; Tougne, L.; Mille, J.; Vacavant, A.; Coquin, D. Understanding Leaves in Natural Images—A Model-Based Approach for Tree Species Identification. Comput. Vis. Image Underst. 2013, 117, 1482–1501. [Google Scholar] [CrossRef]
- Carpentier, M.; Giguère, P.; Gaudreault, J. Tree Species Identification from Bark Images Using Convolutional Neural Networks. arXiv 2018, arXiv:1803.00949. [Google Scholar]
- Valada, A.; Mohan, R.; Burgard, W. Self-Supervised Model Adaptation for Multimodal Semantic Segmentation. Int. J. Comput. Vis. 2020, 128, 1239–1285. [Google Scholar] [CrossRef]
- Andrada, M.E.; Ferreira, J.; Portugal, D.; Couceiro, M. Testing Different CNN Architectures for Semantic Segmentation for Landscaping with Forestry Robotics. In Proceedings of the Workshop on Perception, Planning and Mobility in Forestry Robotics, Virtual Workshop, 29 October 2020. [Google Scholar]
- Fortin, J.M.; Gamache, O.; Grondin, V.; Pomerleau, F.; Giguère, P. Instance Segmentation for Autonomous Log Grasping in Forestry Operations. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 6064–6071. [Google Scholar] [CrossRef]
- Li, H.; Liu, J.; Wang, D. A Fast Instance Segmentation Technique for Log End Faces Based on Metric Learning. Forests 2023, 14, 795. [Google Scholar] [CrossRef]
- Grondin, V.; Fortin, J.M.; Pomerleau, F.; Giguère, P. Tree Detection and Diameter Estimation Based on Deep Learning. For. Int. J. For. Res. 2023, 96, 264–276. [Google Scholar] [CrossRef]
- Teng, C.H.; Chen, Y.S.; Hsu, W.H. Tree Segmentation from an Image. In Proceedings of the 9th IAPR Conference on Machine Vision Applications (MVA), Tsukuba Science City, Japan, 16–18 May 2005; pp. 59–63. [Google Scholar]
- Sodhi, P.; Vijayarangan, S.; Wettergreen, D. In-Field Segmentation and Identification of Plant Structures Using 3D Imaging. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 5180–5187. [Google Scholar]
- Barth, R.; Hemming, J.; van Henten, E.J. Improved Part Segmentation Performance by Optimising Realism of Synthetic Images Using Cycle Generative Adversarial Networks. arXiv 2018, arXiv:1803.06301. [Google Scholar]
- Anantrasirichai, N.; Hannuna, S.; Canagarajah, N. Automatic Leaf Extraction from Outdoor Images. arXiv 2017, arXiv:1709.06437. [Google Scholar]
- Dechesne, C.; Lassalle, P.; Lefèvre, S. Bayesian U-Net: Estimating Uncertainty in Semantic Segmentation of Earth Observation Images. Remote Sens. 2021, 13, 3836. [Google Scholar] [CrossRef]
- Mukhoti, J.; Gal, Y. Evaluating Bayesian Deep Learning Methods for Semantic Segmentation. arXiv 2018, arXiv:1811.12709. [Google Scholar]
- Kendall, A.; Badrinarayanan, V.; Cipolla, R. Bayesian Segnet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding. arXiv 2015, arXiv:1511.02680. [Google Scholar]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
- Lo, T.W.; Siebert, J. Local feature extraction and matching on range images: 2.5D SIFT. Comput. Vis. Image Underst. 2009, 113, 1235–1250. [Google Scholar] [CrossRef]
- Knopp, J.; Prasad, M.; Willems, G.; Timofte, R.; Van Gool, L. Hough Transform and 3D SURF for Robust Three Dimensional Classification. In Proceedings of the Computer Vision—ECCV 2010, Heraklion, Crete, Greece, 5–11 September 2010; Daniilidis, K., Maragos, P., Paragios, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 589–602. [Google Scholar] [CrossRef]
- Aubry, M.; Schlickewei, U.; Cremers, D. The wave kernel signature: A quantum mechanical approach to shape analysis. In Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; IEEE Computer Society: Los Alamitos, CA, USA, 2011; pp. 1626–1633. [Google Scholar] [CrossRef]
- Ghrabat, M.; Ma, G.; Maolood, I.; Alresheedi, S.; Abduljabbar, Z. An effective image retrieval based on optimized genetic algorithm utilized a novel SVM-based convolutional neural network classifier. Hum.-Centric Comput. Inf. Sci. 2019, 9, 31. [Google Scholar] [CrossRef]
- Hänsch, R.; Weber, T.; Hellwich, O. Comparison of 3D interest point detectors and descriptors for point cloud fusion. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, II-3, 57–64. [Google Scholar] [CrossRef]
- Li, B.; Zhang, T.; Xia, T. Vehicle Detection from 3D Lidar Using Fully Convolutional Network. In Proceedings of the Robotics: Science and Systems XII; Hsu, D., Amato, N.M., Berman, S., Jacobs, S.A., Eds.; University of Michigan: Ann Arbor, MI, USA, 2016. [Google Scholar] [CrossRef]
- Graham, B.; Engelcke, M.; Maaten, L.v.d. 3D Semantic Segmentation with Submanifold Sparse Convolutional Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 9224–9232. [Google Scholar] [CrossRef]
- Qi, C.R.; Su, H.; Nießner, M.; Dai, A.; Yan, M.; Guibas, L.J. Volumetric and Multi-view CNNs for Object Classification on 3D Data. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; IEEE Computer Society: Los Alamitos, CA, USA, 2016; pp. 5648–5656. [Google Scholar] [CrossRef]
- Ben-Shabat, Y.; Lindenbaum, M.; Fischer, A. 3DmFV: Three-Dimensional Point Cloud Classification in Real-Time Using Convolutional Neural Networks. IEEE Robot. Autom. Lett. 2018, 3, 3145–3152. [Google Scholar] [CrossRef]
- Song, W.; Zhang, L.; Tian, Y.; Fong, S.; Liu, J.; Gozho, A. CNN-based 3D Object Classification Using Hough Space of LiDAR Point Clouds. Hum.-Centric Comput. Inf. Sci. 2020, 10, 19. [Google Scholar] [CrossRef]
- Milioto, A.; Vizzo, I.; Behley, J.; Stachniss, C. RangeNet++: Fast and Accurate LiDAR Semantic Segmentation. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), The Venetian Macao, Macau, 4–8 November 2019; pp. 4213–4220. [Google Scholar] [CrossRef]
- Charles, R.; Su, H.; Kaichun, M.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 22–25 July 2017; IEEE Computer Society: Los Alamitos, CA, USA, 2017; pp. 77–85. [Google Scholar] [CrossRef]
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Proceedings of the 31st International Conference on Neural Information Processing Systems NIPS’17, Long Beach, CA, USA, 4–9 December 2017; Curran Associates Inc.: Red Hook, NY, USA, 2017; pp. 5105–5114. Available online: https://proceedings.neurips.cc/paper_files/paper/2017/file/d8bf84be3800d12f74d8b05e9b89836f-Paper.pdf (accessed on 1 July 2023).
- Hu, Q.; Yang, B.; Xie, L.; Rosa, S.; Guo, Y.; Wang, Z.; Trigoni, N.; Markham, A. RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Online, 14–19 June 2020; pp. 11105–11114. [Google Scholar] [CrossRef]
- Li, Y.; Bu, R.; Sun, M.; Wu, W.; Di, X.; Chen, B. PointCNN: Convolution on X-Transformed Points. In Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS’18), Montreal, QC, Canada, 2–8 December 2018; Curran Associates Inc.: Red Hook, NY, USA, 2018; pp. 828–838. Available online: https://proceedings.neurips.cc/paper_files/paper/2018/file/f5f8590cd58a54e94377e6ae2eded4d9-Paper.pdf (accessed on 1 July 2023).
- Groh, F.; Wieschollek, P.; Lensch, H.P.A. Flex-Convolution (million-scale point-cloud learning beyond grid-worlds). In Proceedings of the Computer Vision—ACCV 2018, Perth, Australia, 2–6 December 2018; Jawahar, C.V., Li, H., Mori, G., Schindler, K., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 105–122. [Google Scholar] [CrossRef]
- Dovrat, O.; Lang, I.; Avidan, S. Learning to Sample. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; IEEE Computer Society: Los Alamitos, CA, USA, 2019; pp. 2755–2764. [Google Scholar] [CrossRef]
- Yang, J.; Zhang, Q.; Ni, B.; Li, L.; Liu, J.; Zhou, M.; Tian, Q. Modeling Point Clouds With Self-Attention and Gumbel Subset Sampling. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 3318–3327. [Google Scholar] [CrossRef]
- Sutton, R.S.; McAllester, D.; Singh, S.; Mansour, Y. Policy Gradient Methods for Reinforcement Learning with Function Approximation. In Proceedings of Advances in Neural Information Processing Systems (NIPS 1999), Denver, CO, USA, 29 November–4 December 1999; Solla, S., Leen, T., Müller, K., Eds.; MIT Press: Cambridge, MA, USA, 2000; Volume 12. [Google Scholar]
- Mukhandi, H.; Ferreira, J.F.; Peixoto, P. Systematic Sampling of Large-Scale LiDAR Point Clouds for Semantic Segmentation in Forestry Robotics. Electr. Electron. Eng. 2023, preprint. [Google Scholar]
- Bello, S.A.; Yu, S.; Wang, C.; Adam, J.M.; Li, J. Review: Deep Learning on 3D Point Clouds. Remote Sens. 2020, 12, 1729. [Google Scholar] [CrossRef]
- Niu, C.; Zauner, K.P.; Tarapore, D. An Embarrassingly Simple Approach for Visual Navigation of Forest Environments. Front. Robot. AI 2023, 10. [Google Scholar] [CrossRef]
- Xie, D.; Chen, L.; Liu, L.; Chen, L.; Wang, H. Actuators and Sensors for Application in Agricultural Robots: A Review. Machines 2022, 10, 913. [Google Scholar] [CrossRef]
- Ku, J.; Harakeh, A.; Waslander, S.L. In Defense of Classical Image Processing: Fast Depth Completion on the CPU. In Proceedings of the 2018 15th Conference on Computer and Robot Vision (CRV), Toronto, ON, Canada, 8–10 May 2018; pp. 16–22. [Google Scholar]
- Zhao, Y.; Bai, L.; Zhang, Z.; Huang, X. A Surface Geometry Model for LiDAR Depth Completion. IEEE Robot. Autom. Lett. 2021, 6, 4457–4464. [Google Scholar] [CrossRef]
- Xie, Z.; Yu, X.; Gao, X.; Li, K.; Shen, S. Recent Advances in Conventional and Deep Learning-Based Depth Completion: A Survey. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–12. [Google Scholar] [CrossRef]
- Hu, M.; Wang, S.; Li, B.; Ning, S.; Fan, L.; Gong, X. Towards Precise and Efficient Image Guided Depth Completion. In Proceedings of the 2021 International Conference on Robotics and Automation (ICRA 2021), Xi’an, China, 30 May–5 June 2021. [Google Scholar]
- Nunes, R.; Ferreira, J.; Peixoto, P. SynPhoRest—Synthetic Photorealistic Forest Dataset with Depth Information for Machine Learning Model Training. 2022. Available online: https://doi.org/10.5281/zenodo.6369445 (accessed on 1 July 2023).
- Lin, M.; Cao, L.; Zhang, Y.; Shao, L.; Lin, C.W.; Ji, R. Pruning Networks with Cross-Layer Ranking & k-Reciprocal Nearest Filters. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–10. [Google Scholar] [CrossRef]
- Yan, Z.; Wang, K.; Li, X.; Zhang, Z.; Li, J.; Yang, J. RigNet: Repetitive Image Guided Network for Depth Completion. arXiv 2021, arXiv:2107.13802. [Google Scholar] [CrossRef]
- Wong, A.; Cicek, S.; Soatto, S. Learning Topology from Synthetic Data for Unsupervised Depth Completion. IEEE Robot. Autom. Lett. 2021, 6, 1495–1502. [Google Scholar] [CrossRef]
- Eldesokey, A.; Felsberg, M.; Holmquist, K.; Persson, M. Uncertainty-Aware CNNs for Depth Completion: Uncertainty from Beginning to End. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020), Online, 14–19 June 2020; pp. 12014–12023. [Google Scholar]
- KITTI. The KITTI Vision Benchmark Suite. 2023. Available online: https://www.cvlibs.net/datasets/kitti/eval_depth.php?benchmark=depth_completion (accessed on 1 July 2023).
- Liu, X.; Nardari, G.V.; Ojeda, F.C.; Tao, Y.; Zhou, A.; Donnelly, T.; Qu, C.; Chen, S.W.; Romero, R.A.; Taylor, C.J.; et al. Large-Scale Autonomous Flight with Real-Time Semantic Slam under Dense Forest Canopy. IEEE Robot. Autom. Lett. 2022, 7, 5512–5519. [Google Scholar] [CrossRef]
- Andrada, M.E.; Ferreira, J.F.; Kantor, G.; Portugal, D.; Antunes, C.H. Model Pruning in Depth Completion CNNs for Forestry Robotics with Simulated Annealing. In Proceedings of the ICRA 2022 Workshop in Innovation in Forestry Robotics: Research and Industry Adoption, Philadelphia, PA, USA, 23 May 2022. [Google Scholar]
- Han, X.; Li, S.; Wang, X.; Zhou, W. Semantic Mapping for Mobile Robots in Indoor Scenes: A Survey. Information 2021, 12, 92. [Google Scholar] [CrossRef]
- Yang, Z.; Liu, C. TUPPer-Map: Temporal and Unified Panoptic Perception for 3D Metric-Semantic Mapping. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 1094–1101. [Google Scholar]
- Chang, Y.; Tian, Y.; How, J.P.; Carlone, L. Kimera-Multi: A System for Distributed Multi-Robot Metric-Semantic Simultaneous Localization and Mapping. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 11210–11218. [Google Scholar] [CrossRef]
- Li, L.; Kong, X.; Zhao, X.; Huang, T.; Liu, Y. Semantic Scan Context: A Novel Semantic-Based Loop-Closure Method for LiDAR SLAM. Auton. Robots 2022, 46, 535–551. [Google Scholar] [CrossRef]
- Gan, L.; Kim, Y.; Grizzle, J.W.; Walls, J.M.; Kim, A.; Eustice, R.M.; Ghaffari, M. Multitask Learning for Scalable and Dense Multilayer Bayesian Map Inference. IEEE Trans. Robot. 2022, 39, 699–717. [Google Scholar] [CrossRef]
- Liu, J.; Jung, C. NNNet: New Normal Guided Depth Completion from Sparse LiDAR Data and Single Color Image. IEEE Access 2022, 10, 114252–114261. [Google Scholar] [CrossRef]
- Hornung, A.; Wurm, K.M.; Bennewitz, M.; Stachniss, C.; Burgard, W. OctoMap: An Efficient Probabilistic 3D Mapping Framework Based on Octrees. Auton. Robot. 2013, 34, 189–206. [Google Scholar] [CrossRef]
- Doherty, K.; Shan, T.; Wang, J.; Englot, B. Learning-Aided 3-D Occupancy Mapping With Bayesian Generalized Kernel Inference. IEEE Trans. Robot. 2019, 35, 953–966. [Google Scholar] [CrossRef]
- Borges, P.; Peynot, T.; Liang, S.; Arain, B.; Wildie, M.; Minareci, M.; Lichman, S.; Samvedi, G.; Sa, I.; Hudson, N.; et al. A Survey on Terrain Traversability Analysis for Autonomous Ground Vehicles: Methods, Sensors, and Challenges. Field Robot. 2022, 2, 1567–1627. [Google Scholar] [CrossRef]
- Wu, H.; Liu, B.; Su, W.; Chen, Z.; Zhang, W.; Ren, X.; Sun, J. Optimum pipeline for visual terrain classification using improved bag of visual words and fusion methods. J. Sens. 2017, 2017, 8513949. [Google Scholar] [CrossRef]
- Palazzo, S.; Guastella, D.C.; Cantelli, L.; Spadaro, P.; Rundo, F.; Muscato, G.; Giordano, D.; Spampinato, C. Domain adaptation for outdoor robot traversability estimation from RGB data with safety-preserving Loss. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 10014–10021. [Google Scholar]
- Reina, G.; Leanza, A.; Milella, A.; Messina, A. Mind the ground: A power spectral density-based estimator for all-terrain rovers. Measurement 2020, 151, 107136. [Google Scholar] [CrossRef]
- Goodin, C.; Dabbiru, L.; Hudson, C.; Mason, G.; Carruth, D.; Doude, M. Fast terrain traversability estimation with terrestrial lidar in off-road autonomous navigation. In Proceedings of the SPIE Unmanned Systems Technology XXIII, Online, 12–16 April 2021; Volume 11758, pp. 189–199. [Google Scholar]
- Rankin, A.L.; Matthies, L.H. Passive sensor evaluation for unmanned ground vehicle mud detection. J. Field Robot. 2010, 27, 473–490. [Google Scholar] [CrossRef]
- Ahtiainen, J.; Peynot, T.; Saarinen, J.; Scheding, S.; Visala, A. Learned ultra-wideband RADAR sensor model for augmented LIDAR-based traversability mapping in vegetated environments. In Proceedings of the 18th International Conference on Information Fusion (Fusion 2015), Washington, DC, USA, 6–9 July 2015; pp. 953–960. [Google Scholar]
- Winkens, C.; Sattler, F.; Paulus, D. Hyperspectral Terrain Classification for Ground Vehicles. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP—5: VISAPP), Porto, Portugal, 27 February–1 March 2017; pp. 417–424. [Google Scholar]
- Milella, A.; Reina, G.; Nielsen, M. A multi-sensor robotic platform for ground mapping and estimation beyond the visible spectrum. Precis. Agric. 2019, 20, 423–444. [Google Scholar] [CrossRef]
- Vulpi, F.; Milella, A.; Marani, R.; Reina, G. Recurrent and convolutional neural networks for deep terrain classification by autonomous robots. J. Terramech. 2021, 96, 119–131. [Google Scholar] [CrossRef]
- Usui, K. Data augmentation using image-to-image translation for detecting forest strip roads based on deep learning. Int. J. For. Eng. 2021, 32, 57–66. [Google Scholar] [CrossRef]
- Tai, L.; Li, S.; Liu, M. A deep-network solution towards model-less obstacle avoidance. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; pp. 2759–2764. [Google Scholar]
- Giusti, A.; Guzzi, J.; Cireşan, D.C.; He, F.L.; Rodríguez, J.P.; Fontana, F.; Faessler, M.; Forster, C.; Schmidhuber, J.; Caro, G.D.; et al. A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots. IEEE Robot. Autom. Lett. 2016, 1, 661–667. [Google Scholar] [CrossRef]
- Sihvo, S.; Virjonen, P.; Nevalainen, P.; Heikkonen, J. Tree detection around forest harvester based on onboard LiDAR measurements. In Proceedings of the 2018 Baltic Geodetic Congress (BGC Geomatics), Olsztyn, Poland, 21–23 June 2018; pp. 364–367. [Google Scholar]
- Liu, M.; Han, Z.; Chen, Y.; Liu, Z.; Han, Y. Tree species classification of LiDAR data based on 3D deep learning. Measurement 2021, 177, 109301. [Google Scholar] [CrossRef]
- Wang, C.; Wang, J.; Li, C.; Ho, D.; Cheng, J.; Yan, T.; Meng, L.; Meng, M.Q.H. Safe and robust mobile robot navigation in uneven indoor environments. Sensors 2019, 19, 2993. [Google Scholar] [CrossRef]
- Yang, S.; Yang, S.; Yi, X. An efficient spatial representation for path planning of ground robots in 3D environments. IEEE Access 2018, 6, 41539–41550. [Google Scholar] [CrossRef]
- Fankhauser, P.; Bloesch, M.; Hutter, M. Probabilistic terrain mapping for mobile robots with uncertain localization. IEEE Robot. Autom. Lett. 2018, 3, 3019–3026. [Google Scholar] [CrossRef]
- Ruetz, F.; Hernández, E.; Pfeiffer, M.; Oleynikova, H.; Cox, M.; Lowe, T.; Borges, P. OVPC Mesh: 3D free-space representation for local ground vehicle navigation. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 8648–8654. [Google Scholar]
- Krüsi, P.; Furgale, P.; Bosse, M.; Siegwart, R. Driving on point clouds: Motion planning, trajectory optimization, and terrain assessment in generic nonplanar environments. J. Field Robot. 2017, 34, 940–984. [Google Scholar] [CrossRef]
- Ramachandram, D.; Taylor, G.W. Deep Multimodal Learning: A Survey on Recent Advances and Trends. IEEE Signal Process. Mag. 2017, 34, 96–108. [Google Scholar] [CrossRef]
- Bao, Y.; Song, K.; Wang, J.; Huang, L.; Dong, H.; Yan, Y. Visible and Thermal Images Fusion Architecture for Few-Shot Semantic Segmentation. J. Vis. Commun. Image Represent. 2021, 80, 103306. [Google Scholar] [CrossRef]
- Choe, G.; Kim, S.H.; Im, S.; Lee, J.Y.; Narasimhan, S.G.; Kweon, I.S. RANUS: RGB and NIR Urban Scene Dataset for Deep Scene Parsing. IEEE Robot. Autom. Lett. 2018, 3, 1808–1815. [Google Scholar] [CrossRef]
- Ali, I.; Durmush, A.; Suominen, O.; Yli-Hietanen, J.; Peltonen, S.; Collin, J.; Gotchev, A. FinnForest Dataset: A Forest Landscape for Visual SLAM. Robot. Auton. Syst. 2020, 132, 103610. [Google Scholar] [CrossRef]
- da Silva, D.Q.; dos Santos, F.N.; Santos, L.; Aguiar, A. QuintaReiFMD - ROS1.0 Bag Dataset Acquired with AgRob V16 in Portuguese Forest. 2021. Available online: https://doi.org/10.5281/zenodo.5045355 (accessed on 1 July 2023).
- da Silva, D.Q.; dos Santos, F.N. ForTrunkDet—Forest Dataset of Visible and Thermal Annotated Images for Object Detection. 2021. Available online: https://doi.org/10.5281/zenodo.5213825 (accessed on 1 July 2023).
- Cordts, M.; Omran, M.; Ramos, S.; Scharwächter, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The Cityscapes Dataset. In Proceedings of the CVPR Workshop on the Future of Datasets in Vision, Boston, MA, USA, 8–10 June 2015; Volume 2. [Google Scholar]
- Niu, C.; Tarapore, D.; Zauner, K.P. Low-viewpoint forest depth dataset for sparse rover swarms. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 8035–8040. [Google Scholar]
- Grondin, V.; Pomerleau, F.; Giguère, P. Training Deep Learning Algorithms on Synthetic Forest Images for Tree Detection. In Proceedings of the ICRA 2022 Workshop in Innovation in Forestry Robotics: Research and Industry Adoption, Philadelphia, PA, USA, 23 May 2022; Available online: https://openreview.net/forum?id=SxWgxLtyW7c (accessed on 1 July 2023).
- Uhrig, J.; Schneider, N.; Schneider, L.; Franke, U.; Brox, T.; Geiger, A. Sparsity Invariant CNNs. In Proceedings of the 2017 International Conference on 3D Vision (3DV), Qingdao, China, 10–12 October 2017; pp. 11–20. [Google Scholar]
- Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A Multimodal Dataset for Autonomous Driving. arXiv 2019, arXiv:1903.11027. [Google Scholar]
- Bittner, D.; Andrada, M.E.; Portugal, D.; Ferreira, J.F. SEMFIRE Forest Dataset for Semantic Segmentation and Data Augmentation. 2021. Available online: https://doi.org/10.5281/ZENODO.5819064 (accessed on 1 July 2023).
- Wang, W.; Zhu, D.; Wang, X.; Hu, Y.; Qiu, Y.; Wang, C.; Hu, Y.; Kapoor, A.; Scherer, S. TartanAir: A Dataset to Push the Limits of Visual SLAM. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, Las Vegas, NV, USA, 25–29 October 2020; pp. 4909–4916. [Google Scholar]
- Ros, G.; Sellart, L.; Materzynska, J.; Vazquez, D.; Lopez, A.M. The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 3234–3243. [Google Scholar]
- Nunes, R.; Ferreira, J.; Peixoto, P. Procedural Generation of Synthetic Forest Environments to Train Machine Learning Algorithms. In Proceedings of the ICRA 2022 Workshop in Innovation in Forestry Robotics: Research and Industry Adoption, Philadelphia, PA, USA, 23 May 2022; Available online: https://irep.ntu.ac.uk/id/eprint/46417/ (accessed on 1 July 2023).
- Kesten, R.; Usman, M.; Houston, J.; Pandya, T.; Nadhamuni, K.; Ferreira, A.; Yuan, M.; Low, B.; Jain, A.; Ondruska, P.; et al. Level 5 Perception Dataset 2020. 2019. Available online: https://apera.io/a/tech/561428/lyft-level-5-dataset (accessed on 1 July 2023).
- Bittner, D. Data Augmentation Solutions for CNN-Based Semantic Segmentation in Forestry Applications. Bachelor’s Thesis, Regensburg University of Applied Sciences (OTH), Regensburg, Germany, 2022. [Google Scholar]
- Shorten, C.; Khoshgoftaar, T.M. A Survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
- Bird, J.J.; Faria, D.R.; Ekárt, A.; Ayrosa, P.P.S. From Simulation to Reality: CNN Transfer Learning for Scene Classification. In Proceedings of the 2020 IEEE 10th International Conference on Intelligent Systems (IS), Varna, Bulgaria, 28–30 August 2020; pp. 619–625. [Google Scholar] [CrossRef]
- Bittner, D.; Ferreira, J.F.; Andrada, M.E.; Bird, J.J.; Portugal, D. Generating Synthetic Multispectral Images for Semantic Segmentation in Forestry Applications. In Proceedings of the ICRA 2022 Workshop in Innovation in Forestry Robotics: Research and Industry Adoption, Philadelphia, PA, USA, 23 May 2022; Available online: https://irep.ntu.ac.uk/id/eprint/46416/ (accessed on 1 July 2023).
- Gao, Y.; Mosalam, K.M. Deep Transfer Learning for Image-Based Structural Damage Recognition. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 748–768. [Google Scholar] [CrossRef]
- Hussain, M.; Bird, J.J.; Faria, D.R. A Study on CNN Transfer Learning for Image Classification. In Advances in Computational Intelligence Systems; Advances in Intelligent Systems and Computing; Lotfi, A., Bouchachia, H., Gegov, A., Langensiepen, C., McGinnity, M., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 191–202. [Google Scholar] [CrossRef]
- Johnson, J.M.; Khoshgoftaar, T.M. Survey on Deep Learning with Class Imbalance. J. Big Data 2019, 6, 27. [Google Scholar] [CrossRef]
- Liu, Y.; Yang, G.; Qiao, S.; Liu, M.; Qu, L.; Han, N.; Wu, T.; Yuan, G.; Wu, T.; Peng, Y. Imbalanced Data Classification: Using Transfer Learning and Active Sampling. Eng. Appl. Artif. Intell. 2023, 117, 105621. [Google Scholar] [CrossRef]
- Younes, G.; Asmar, D.; Shammas, E.; Zelek, J. Keyframe-Based Monocular SLAM: Design, Survey, and Future Directions. Robot. Auton. Syst. 2017, 98, 67–88. [Google Scholar] [CrossRef]
- Scaramuzza, D.; Fraundorfer, F. Visual Odometry [Tutorial]. IEEE Robot. Autom. Mag. 2011, 18, 80–92. [Google Scholar] [CrossRef]
- Chahine, G.; Pradalier, C. Survey of Monocular SLAM Algorithms in Natural Environments. In Proceedings of the 15th Conference on Computer and Robot Vision (CRV), Toronto, ON, Canada, 8–10 May 2018; pp. 345–352. [Google Scholar] [CrossRef]
- Konolige, K.; Agrawal, M.; Sola, J. Large-Scale Visual Odometry for Rough Terrain. In Robotics Research; Springer: Berlin/Heidelberg, Germany, 2010; pp. 201–212. [Google Scholar]
- Otsu, K.; Otsuki, M.; Ishigami, G.; Kubota, T. Terrain Adaptive Detector Selection for Visual Odometry in Natural Scenes. Adv. Robot. 2013, 27, 1465–1476. [Google Scholar] [CrossRef]
- Daftry, S.; Dey, D.; Sandhawalia, H.; Zeng, S.; Bagnell, J.A.; Hebert, M. Semi-Dense Visual Odometry for Monocular Navigation in Cluttered Environment. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2015), Seattle, WA, USA, 25–30 May 2015. [Google Scholar]
- Peretroukhin, V.; Clement, L.; Kelly, J. Reducing Drift in Visual Odometry by Inferring Sun Direction Using a Bayesian Convolutional Neural Network. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Marina Bay Sands, Singapore, 29 May–3 June 2017; pp. 2035–2042. [Google Scholar]
- Giancola, S.; Schneider, J.; Wonka, P.; Ghanem, B.S. Integration of Absolute Orientation Measurements in the KinectFusion Reconstruction Pipeline. arXiv 2018, arXiv:1802.03980. [Google Scholar]
- Paudel, D.P.; Demonceaux, C.; Habed, A.; Vasseur, P. 2D–3D Synchronous/Asynchronous Camera Fusion for Visual Odometry. Auton. Robots 2018, 43, 21–35. [Google Scholar] [CrossRef]
- Smolyanskiy, N.; Kamenev, A.; Smith, J.; Birchfield, S. Toward low-flying autonomous MAV trail navigation using deep neural networks for environmental awareness. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 4241–4247. [Google Scholar]
- Mascaro, R.; Teixeira, L.; Hinzmann, T.; Siegwart, R.; Chli, M. GOMSF: Graph-Optimization Based Multi-Sensor Fusion for Robust UAV Pose Estimation. In Proceedings of the International Conference on Robotics and Automation (ICRA 2018) IEEE, Brisbane, Australia, 21–26 May 2018. [Google Scholar]
- Kocer, B.B.; Ho, B.; Zhu, X.; Zheng, P.; Farinha, A.; Xiao, F.; Stephens, B.; Wiesemüller, F.; Orr, L.; Kovac, M. Forest drones for environmental sensing and nature conservation. In Proceedings of the 2021 IEEE Aerial Robotic Systems Physically Interacting with the Environment (AIRPHARO), Biograd Na Moru, Croatia, 4–5 October 2021; pp. 1–8. [Google Scholar]
- Griffith, S.; Pradalier, C. Survey Registration for Long-Term Natural Environment Monitoring. J. Field Robot. 2017, 34, 188–208. [Google Scholar] [CrossRef]
- Naseer, T.; Burgard, W.; Stachniss, C. Robust Visual Localization Across Seasons. IEEE Trans. Robot. 2018, 34, 289–302. [Google Scholar] [CrossRef]
- Engel, J.; Koltun, V.; Cremers, D. Direct Sparse Odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 611–625. [Google Scholar] [CrossRef]
- Cole, D.; Newman, P. Using Laser Range Data for 3D SLAM in Outdoor Environments. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006), Orlando, FL, USA, 15–19 May 2006; pp. 1556–1563. [Google Scholar] [CrossRef]
- Newman, P.; Cole, D.; Ho, K. Outdoor SLAM Using Visual Appearance and Laser Ranging. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006), Orlando, FL, USA, 15–19 May 2006; pp. 1180–1187. [Google Scholar] [CrossRef]
- Ramos, F.T.; Nieto, J.; Durrant-Whyte, H.F. Recognising and Modelling Landmarks to Close Loops in Outdoor SLAM. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation (ICRA 2007), Rome, Italy, 10–14 April 2007; pp. 2036–2041. [Google Scholar] [CrossRef]
- Angeli, A.; Filliat, D.; Doncieux, S.; Meyer, J.A. Fast and Incremental Method for Loop-Closure Detection Using Bags of Visual Words. IEEE Trans. Robot. 2008, 24, 1027–1037. [Google Scholar] [CrossRef]
- Han, L.; Fang, L. MILD: Multi-index Hashing for Appearance Based Loop Closure Detection. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME 2017), Hong Kong, 10–14 July 2017; pp. 139–144. [Google Scholar] [CrossRef]
- Thrun, S.; Montemerlo, M. The Graph SLAM Algorithm with Applications to Large-Scale Mapping of Urban Structures. Int. J. Robot. Res. 2006, 25, 403–429. [Google Scholar] [CrossRef]
- Singh, S.; Kelly, A. Robot Planning in the Space of Feasible Actions: Two Examples. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 1996), Minneapolis, MN, USA, 22–28 April 1996; Volume 4, pp. 3309–3316. [Google Scholar] [CrossRef]
- Pfaff, P.; Triebel, R.; Burgard, W. An Efficient Extension to Elevation Maps for Outdoor Terrain Mapping and Loop Closing. Int. J. Robot. Res. 2007, 26, 217–230. [Google Scholar] [CrossRef]
- Zhang, J.; Singh, S. LOAM: Lidar odometry and mapping in real-time. In Proceedings of the Robotics: Science and Systems (RSS 2014), Berkeley, CA, USA, 12–16 July 2014; Volume 2, pp. 1–9. [Google Scholar]
- Shan, T.; Englot, B. LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4758–4765. [Google Scholar]
- Kim, G.; Kim, A. Scan Context: Egocentric Spatial Descriptor for Place Recognition within 3D Point Cloud Map. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018. [Google Scholar] [CrossRef]
- Ye, H.; Chen, Y.; Liu, M. Tightly Coupled 3D Lidar Inertial Odometry and Mapping. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2019), Montreal, QC, Canada, 20–24 May 2019. [Google Scholar]
- Xu, W.; Zhang, F. FAST-LIO: A Fast, Robust LiDAR-Inertial Odometry Package by Tightly-Coupled Iterated Kalman Filter. IEEE Robot. Autom. Lett. 2021, 6, 3317–3324. [Google Scholar] [CrossRef]
- Xu, W.; Cai, Y.; He, D.; Lin, J.; Zhang, F. FAST-LIO2: Fast Direct LiDAR-inertial Odometry. arXiv 2021, arXiv:2107.06829. [Google Scholar] [CrossRef]
- Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Daniela, R. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 5135–5142. [Google Scholar]
- Reinke, A.; Palieri, M.; Morrell, B.; Chang, Y.; Ebadi, K.; Carlone, L.; Agha-Mohammadi, A.A. LOCUS 2.0: Robust and Computationally Efficient Lidar Odometry for Real-Time 3D Mapping. IEEE Robot. Autom. Lett. 2022, 7, 9043–9050. [Google Scholar] [CrossRef]
- Lin, J.; Zhang, F. R3LIVE: A Robust, Real-Time, RGB-Colored, LiDAR-Inertial-Visual Tightly-Coupled State Estimation and Mapping Package. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 10672–10678. [Google Scholar]
- Yin, H.; Li, S.; Tao, Y.; Guo, J.; Huang, B. Dynam-SLAM: An Accurate, Robust Stereo Visual-Inertial SLAM Method in Dynamic Environments. IEEE Trans. Robot. 2022, 39, 289–308. [Google Scholar] [CrossRef]
- Wang, Y.; Ma, H. mVIL-Fusion: Monocular Visual-Inertial-LiDAR Simultaneous Localization and Mapping in Challenging Environments. IEEE Robot. Autom. Lett. 2022, 8, 504–511. [Google Scholar] [CrossRef]
- Yuan, Z.; Wang, Q.; Cheng, K.; Hao, T.; Yang, X. SDV-LOAM: Semi-Direct Visual-LiDAR Odometry and Mapping. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 11203–11220. [Google Scholar] [CrossRef] [PubMed]
- He, D.; Xu, W.; Chen, N.; Kong, F.; Yuan, C.; Zhang, F. Point-LIO: Robust High-Bandwidth Light Detection and Ranging Inertial Odometry. Adv. Intell. Syst. 2023, 5, 2200459. [Google Scholar] [CrossRef]
- Vizzo, I.; Guadagnino, T.; Mersch, B.; Wiesmann, L.; Behley, J.; Stachniss, C. KISS-ICP: In Defense of Point-to-Point ICP - Simple, Accurate, and Robust Registration If Done the Right Way. IEEE Robot. Autom. Lett. 2023, 8, 1029–1036. [Google Scholar] [CrossRef]
- Karfakis, P.T.; Couceiro, M.S.; Portugal, D. NR5G-SAM: A SLAM Framework for Field Robot Applications Based on 5G New Radio. Sensors 2023, 23, 5354. [Google Scholar] [CrossRef]
- Lu, Z.; Hu, Z.; Uchimura, K. SLAM Estimation in Dynamic Outdoor Environments: A Review. In Intelligent Robotics and Applications; Lecture Notes in Computer Science; Xie, M., Xiong, Y., Xiong, C., Liu, H., Hu, Z., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 255–267. [Google Scholar] [CrossRef]
- Cristóvão, M.P.; Portugal, D.; Carvalho, A.E.; Ferreira, J.F. A LiDAR-Camera-Inertial-GNSS Apparatus for 3D Multimodal Dataset Collection in Woodland Scenarios. Sensors 2023, 23, 6676. [Google Scholar] [CrossRef]
- Tian, Y.; Liu, K.; Ok, K.; Tran, L.; Allen, D.; Roy, N.; How, J.P. Search and rescue under the forest canopy using multiple UAVs. Int. J. Robot. Res. 2020, 39, 1201–1221. [Google Scholar] [CrossRef]
- Agrawal, M.; Konolige, K. Real-Time Localization in Outdoor Environments Using Stereo Vision and Inexpensive GPS. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, 20–24 August 2006; Volume 3, pp. 1063–1068. [Google Scholar] [CrossRef]
- Konolige, K.; Agrawal, M.; Bolles, R.C.; Cowan, C.; Fischler, M.; Gerkey, B. Outdoor Mapping and Navigation Using Stereo Vision. In Proceedings of the Experimental Robotics: The 10th International Symposium on Experimental Robotics (ISER 2006), Rio de Janeiro, Brazil, 6–12 July 2006; Khatib, O., Kumar, V., Rus, D., Eds.; Springer Tracts in Advanced Robotics. Springer: Berlin/Heidelberg, Germany, 2008; pp. 179–190. [Google Scholar] [CrossRef]
- Schleicher, D.; Bergasa, L.M.; Ocana, M.; Barea, R.; Lopez, M.E. Real-Time Hierarchical Outdoor SLAM Based on Stereovision and GPS Fusion. IEEE Trans. Intell. Transp. Syst. 2009, 10, 440–452. [Google Scholar] [CrossRef]
- Brand, C.; Schuster, M.J.; Hirschmüller, H.; Suppa, M. Stereo-Vision Based Obstacle Mapping for Indoor/Outdoor SLAM. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), Chicago, IL, USA, 14–18 September 2014; pp. 1846–1853. [Google Scholar] [CrossRef]
- Brand, C.; Schuster, M.J.; Hirschmüller, H.; Suppa, M. Submap Matching for Stereo-Vision Based Indoor/Outdoor SLAM. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2015), Hamburg, Germany, 28 September–2 October 2015; pp. 5670–5677. [Google Scholar] [CrossRef]
- Moosmann, F.; Stiller, C. Velodyne SLAM. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 393–398. [Google Scholar] [CrossRef]
- Abbas, S.M.; Muhammad, A. Outdoor RGB-D SLAM Performance in Slow Mine Detection. In Proceedings of the 7th German Conference on Robotics (ROBOTIK 2012), Munich, Germany, 21–22 May 2012; pp. 1–6. [Google Scholar]
- Portugal, D.; Gouveia, B.D.; Marques, L. A Distributed and Multithreaded SLAM Architecture for Robotic Clusters and Wireless Sensor Networks. In Cooperative Robots and Sensor Networks 2015; Koubâa, A., Martínez-de Dios, J., Eds.; Studies in Computational Intelligence; Springer International Publishing: Cham, Switzerland, 2015; pp. 121–141. [Google Scholar] [CrossRef]
- Sakai, T.; Koide, K.; Miura, J.; Oishi, S. Large-Scale 3D Outdoor Mapping and on-Line Localization Using 3D-2D Matching. In Proceedings of the 2017 IEEE/SICE International Symposium on System Integration (SII), Taipei, Taiwan, 11–14 December 2017; pp. 829–834. [Google Scholar] [CrossRef]
- Lee, Y.J.; Song, J.B.; Choi, J.H. Performance Improvement of Iterative Closest Point-Based Outdoor SLAM by Rotation Invariant Descriptors of Salient Regions. J. Intell. Robot. Syst. 2013, 71, 349–360. [Google Scholar] [CrossRef]
- Suzuki, T.; Kitamura, M.; Amano, Y.; Hashizume, T. 6-DOF Localization for a Mobile Robot Using Outdoor 3D Voxel Maps. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 5737–5743. [Google Scholar] [CrossRef]
- Droeschel, D.; Behnke, S. Efficient Continuous-Time SLAM for 3D Lidar-Based Online Mapping. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, Australia, 21–26 May 2018; pp. 5000–5007. [Google Scholar] [CrossRef]
- Harrison, A.; Newman, P. High Quality 3D Laser Ranging under General Vehicle Motion. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2008), Pasadena, CA, USA, 19–23 May 2008; pp. 7–12. [Google Scholar] [CrossRef]
- Levinson, J.; Thrun, S. Robust Vehicle Localization in Urban Environments Using Probabilistic Maps. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA 2010), Anchorage, AK, USA, 3–7 May 2010; pp. 4372–4378. [Google Scholar] [CrossRef]
- Simanek, J.; Reinstein, M.; Kubelka, V. Evaluation of the EKF-Based Estimation Architectures for Data Fusion in Mobile Robots. IEEE/ASME Trans. Mechatron. 2015, 20, 985–990. [Google Scholar] [CrossRef]
- Bernuy, F.; Ruiz Del Solar, J. Semantic Mapping of Large-Scale Outdoor Scenes for Autonomous Off-Road Driving. In Proceedings of the IEEE International Conference on Computer Vision Workshop (ICCVW 2015), Santiago, Chile, 7–13 December 2015; pp. 124–130. [Google Scholar] [CrossRef]
- Boularias, A.; Duvallet, F.; Oh, J.; Stentz, A. Grounding Spatial Relations for Outdoor Robot Navigation. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA 2015), Seattle, WA, USA, 26–30 May 2015; pp. 1976–1982. [Google Scholar] [CrossRef]
- Milford, M.J.; Wyeth, G.F. Mapping a Suburb with a Single Camera Using a Biologically Inspired SLAM System. IEEE Trans. Robot. 2008, 24, 1038–1053. [Google Scholar] [CrossRef]
- Glover, A.J.; Maddern, W.P.; Milford, M.J.; Wyeth, G.F. FAB-MAP + RatSLAM: Appearance-based SLAM for Multiple Times of Day. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2010), Anchorage, AK, USA, 3–7 May 2010; pp. 3507–3512. [Google Scholar] [CrossRef]
- Milford, M.; George, A. Featureless Visual Processing for SLAM in Changing Outdoor Environments. In Field and Service Robotics: Results of the 8th International Conference; Yoshida, K., Tadokoro, S., Eds.; Springer Tracts in Advanced Robotics, Volume 92; Springer: Berlin/Heidelberg, Germany, 2014; pp. 569–583. Available online: https://link.springer.com/chapter/10.1007/978-3-642-40686-7_38 (accessed on 1 July 2023).
- Mur-Artal, R.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef]
- Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef]
- Schuster, M.J.; Brand, C.; Hirschmüller, H.; Suppa, M.; Beetz, M. Multi-Robot 6D Graph SLAM Connecting Decoupled Local Reference Filters. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2015), Hamburg, Germany, 28 September–2 October 2015; pp. 5093–5100. [Google Scholar] [CrossRef]
- Rossmann, J.; Schluse, M.; Schlette, C.; Buecken, A.; Krahwinkler, P.; Emde, M. Realization of a Highly Accurate Mobile Robot System for Multi Purpose Precision Forestry Applications. In Proceedings of the International Conference on Advanced Robotics (ICAR 2009), Munich, Germany, 22–26 June 2009; pp. 1–6. [Google Scholar]
- Post, M.A.; Bianco, A.; Yan, X.T. Autonomous Navigation with ROS for a Mobile Robot in Agricultural Fields. In Proceedings of the 14th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2017), Madrid, Spain, 26–28 July 2017; Available online: https://strathprints.strath.ac.uk/61247/ (accessed on 1 July 2023).
- Miettinen, M.; Ohman, M.; Visala, A.; Forsman, P. Simultaneous Localization and Mapping for Forest Harvesters. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2007), Rome, Italy, 10–14 April 2007; pp. 517–522. [Google Scholar] [CrossRef]
- Auat Cheein, F.; Steiner, G.; Perez Paina, G.; Carelli, R. Optimized EIF-SLAM Algorithm for Precision Agriculture Mapping Based on Stems Detection. Comput. Electron. Agric. 2011, 78, 195–207. [Google Scholar] [CrossRef]
- Duarte, M.; dos Santos, F.N.; Sousa, A.; Morais, R. Agricultural Wireless Sensor Mapping for Robot Localization. In Proceedings of the Robot 2015: Second Iberian Robotics Conference, Advances in Intelligent Systems and Computing, Lisbon, Portugal, 19–21 November 2015; Reis, L.P., Moreira, A.P., Lima, P.U., Montano, L., Muñoz-Martinez, V., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 359–370. [Google Scholar] [CrossRef]
- Yang, N.; Wang, R.; Gao, X.; Cremers, D. Challenges in Monocular Visual Odometry: Photometric Calibration, Motion Bias, and Rolling Shutter Effect. IEEE Robot. Autom. Lett. 2018, 3, 2878–2885. [Google Scholar] [CrossRef]
- Aqel, M.O.A.; Marhaban, M.H.; Saripan, M.I.; Ismail, N.B. Review of Visual Odometry: Types, Approaches, Challenges, and Applications. SpringerPlus 2016, 5, 1897. [Google Scholar] [CrossRef] [PubMed]
- Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age. IEEE Trans. Robot. 2016, 32, 1309–1332. [Google Scholar] [CrossRef]
- Arkin, R.C.; Balch, T. Cooperative Multiagent Robotic Systems. In Artificial Intelligence and Mobile Robots: Case Studies of Successful Robot Systems; MIT Press: Cambridge, MA, USA, 1998; pp. 277–296. [Google Scholar]
- Zhang, J.; Singh, S. Laser-visual-inertial Odometry and Mapping with High Robustness and Low Drift. J. Field Robot. 2018, 35, 1242–1264. [Google Scholar] [CrossRef]
- Hawes, N.; Burbridge, C.; Jovan, F.; Kunze, L.; Lacerda, B.; Mudrova, L.; Young, J.; Wyatt, J.; Hebesberger, D.; Kortner, T.; et al. The STRANDS Project: Long-Term Autonomy in Everyday Environments. IEEE Robot. Autom. Mag. 2017, 24, 146–156. [Google Scholar] [CrossRef]
- Rocha, R.P.; Portugal, D.; Couceiro, M.; Araújo, F.; Menezes, P.; Lobo, J. The CHOPIN project: Cooperation between human and rObotic teams in catastrophic incidents. In Proceedings of the 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR 2013), Linkoping, Sweden, 21–26 October 2013; pp. 1–4. [Google Scholar] [CrossRef]
- Kruijff-Korbayová, I.; Colas, F.; Gianni, M.; Pirri, F.; de Greeff, J.; Hindriks, K.; Neerincx, M.; Ögren, P.; Svoboda, T.; Worst, R. TRADR Project: Long-Term Human-Robot Teaming for Robot Assisted Disaster Response. Künstliche Intell. 2015, 29, 193–201. [Google Scholar] [CrossRef]
- Singh, A.; Krause, A.R.; Guestrin, C.; Kaiser, W.J.; Batalin, M.A. Efficient Planning of Informative Paths for Multiple Robots. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI 2007), Hyderabad, India, 6–12 January 2007; Volume 7, pp. 2204–2211. Available online: https://openreview.net/forum?id=ryVLY4G_ZS (accessed on 1 July 2023).
- La, H.M.; Sheng, W.; Chen, J. Cooperative and Active Sensing in Mobile Sensor Networks for Scalar Field Mapping. IEEE Trans. Syst. Man Cybern. Syst. 2015, 45, 1–12. [Google Scholar] [CrossRef]
- Ma, K.C.; Liu, L.; Sukhatme, G.S. An Information-Driven and Disturbance-Aware Planning Method for Long-Term Ocean Monitoring. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; pp. 2102–2108. [Google Scholar] [CrossRef]
- Manjanna, S.; Dudek, G. Data-Driven Selective Sampling for Marine Vehicles Using Multi-Scale Paths. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 6111–6117. [Google Scholar] [CrossRef]
- Euler, J.; von Stryk, O. Optimized Vehicle-Specific Trajectories for Cooperative Process Estimation by Sensor-Equipped UAVs. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2017), Marina Bay Sands, Singapore, 29 May–3 June 2017; pp. 3397–3403. [Google Scholar] [CrossRef]
- Merino, L.; Caballero, F.; Martínez-de Dios, J.R.; Maza, I.; Ollero, A. An unmanned aircraft system for automatic forest fire monitoring and measurement. J. Intell. Robot. Syst. 2012, 65, 533–548. [Google Scholar] [CrossRef]
- Ahmad, A.; Walter, V.; Petráček, P.; Petrlík, M.; Báča, T.; Žaitlík, D.; Saska, M. Autonomous aerial swarming in gnss-denied environments with high obstacle density. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2021), Xi’an, China, 30 May–5 June 2021; pp. 570–576. [Google Scholar]
- Couceiro, M.S.; Portugal, D. Swarming in forestry environments: Collective exploration and network deployment. Swarm Intell. Princ. Curr. Algorithms Methods 2018, 119, 323–344. [Google Scholar]
- Tarapore, D.; Groß, R.; Zauner, K.P. Sparse robot swarms: Moving swarms to real-world applications. Front. Robot. AI 2020, 7, 83. [Google Scholar] [CrossRef]
- Ju, C.; Kim, J.; Seol, J.; Son, H.I. A review on multirobot systems in agriculture. Comput. Electron. Agric. 2022, 202, 107336. [Google Scholar] [CrossRef]
- Martins, G.S.; Ferreira, J.F.; Portugal, D.; Couceiro, M.S. MoDSeM: Modular Framework for Distributed Semantic Mapping. In Proceedings of the UK-RAS Robotics and Autonomous Systems Conference: “Embedded Intelligence: Enabling and Supporting RAS Technologies”, Loughborough University, Loughborough, UK, 24 January 2019; pp. 12–15. [Google Scholar] [CrossRef]
- Martins, G.S.; Ferreira, J.F.; Portugal, D.; Couceiro, M.S. MoDSeM: Towards Semantic Mapping with Distributed Robots. In Proceedings of the 20th Towards Autonomous Robotic Systems Conference, London, UK, 3–5 July 2019; pp. 131–142. [Google Scholar] [CrossRef]
- Rocha, R.; Dias, J.; Carvalho, A. Cooperative Multi-Robot Systems: A Study of Vision-Based 3-D Mapping Using Information Theory. Robot. Auton. Syst. 2005, 53, 282–311. [Google Scholar] [CrossRef]
- Das, G.P.; McGinnity, T.M.; Coleman, S.A.; Behera, L. A Fast Distributed Auction and Consensus Process Using Parallel Task Allocation and Execution. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011), San Francisco, CA, USA, 25–30 September 2011; pp. 4716–4721. [Google Scholar] [CrossRef]
- Calleja-Huerta, A.; Lamandé, M.; Green, O.; Munkholm, L.J. Impacts of Load and Repeated Wheeling from a Lightweight Autonomous Field Robot on the Physical Properties of a Loamy Sand Soil. Soil Tillage Res. 2023, 233, 105791. [Google Scholar] [CrossRef]
- Batey, T. Soil compaction and soil management—A review. Soil Use Manag. 2009, 25, 335–345. [Google Scholar] [CrossRef]
- Niu, C.; Zauner, K.P.; Tarapore, D. End-to-End Learning for Visual Navigation of Forest Environments. Forests 2023, 14, 268. [Google Scholar] [CrossRef]
- da Silva, D.Q.; dos Santos, F.N.; Sousa, A.J.; Filipe, V.; Boaventura-Cunha, J. Unimodal and Multimodal Perception for Forest Management: Review and Dataset. Computation 2021, 9, 127. [Google Scholar] [CrossRef]
- Jensen, K.; Larsen, M.; Nielsen, S.; Larsen, L.; Olsen, K.; Jørgensen, R. Towards an Open Software Platform for Field Robots in Precision Agriculture. Robotics 2014, 3, 207–234. [Google Scholar] [CrossRef]
- Portugal, D.; Ferreira, J.F.; Couceiro, M.S. Requirements specification and integration architecture for perception in a cooperative team of forestry robots. In Proceedings of the Annual Conference towards Autonomous Robotic Systems, Online, 16 September 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 329–344. [Google Scholar]
Group | Country | Main References | Research Projects | Main Focus and Applications |
---|---|---|---|---|
FRUC, University of Coimbra | Portugal | [27,32,107,108] | SEMFIRE, CORE, SAFEFOREST | Perception and decision-making for forestry robots. |
RAISE, Nottingham Trent University | UK | [27,32,107,108] | SEMFIRE, CORE, SAFEFOREST | Perception and decision-making for forestry robots. |
Carnegie Mellon University | USA | [29,30,31] | SAFEFOREST | Perception for forestry robots: navigation, mapping, classification and vegetation detection. |
CRIIS, INESC TEC | Portugal | [33,35,109,110] | SCORPION, BIOTECFOR | Robotics in industry and intelligent systems for agriculture and forestry environments. |
Centre for Automation and Robotics (CAR at CSIC-UPM) | Spain | [37,44] | RHEA, CROPS | Robot fleets and swarms for agriculture, crop monitoring. |
Televitis at Universidad de La Rioja | Spain | [40] | VINEROBOT, VineScout | Perception and actuation for agricultural robots for vineyard monitoring. |
Umeå University | Sweden | [51,52,56] | CROPS, SWEEPER | Perception, manipulation, and decision-making for forestry and agricultural robots, including simulation and literature research. |
Wageningen University | Netherlands | [61,65] | CROPS, SWEEPER, SAGA | Perception for agricultural robots: crop/weed classification, plant classification, weed mapping. |
University of Bonn | Germany | [68,69] | Digiforest | Perception for agricultural robots: crop/weed classification using multiple techniques. |
University of Milan | Italy | [72] | CROPS | Perception for agricultural robots: detection of powdery mildew on grapevine leaves. |
IRSTEA | France | [73] | RHEA | Perception for crop monitoring: nitrogen content assessment in wheat. |
University of Liège | Belgium | [74,76] | n/a | Perception for forestry and agricultural robots: discrimination of deciduous tree species and nitrogen content estimation. |
ETH Zurich | Switzerland | [77,78,80,81] | Digiforest, THING | Fully autonomous forest excavators and vegetation detection and classification. |
University of Lincoln | UK | [82] | RASberry, BACCHUS | Fleets of robots for horticulture and crop/weed discrimination for automated weeding. |
Harper Adams University | UK | n/a | L-CAS | Perception and actuation for agricultural robots: vision-guided weed identification and robotic gimbal for spray- or laser-based weeding. |
Aleksandras Stulginskis University | Lithuania | [85] | n/a | Perception for agricultural robots: UAV-based spring wheat monitoring. |
Transilvania University of Brașov | Romania | [86] | n/a | Automation of data collection in farming operations. |
Agricultural Institute of Slovenia | Slovenia | [87] | CROPS | Perception for agricultural robots: real-time position of air-assisted sprayers. |
Laval University | Canada | [88,89,90,91] | SNOW | Automated forestry, SLAM and navigation in forest environments. |
Massachusetts Institute of Technology | USA | [92,93] | n/a | Perception for forestry robots: terrain classification and tree stem identification. |
Federico Santa María Technical University | Chile | [94,95,96] | n/a | Multispectral imagery perception in forests and N-trailers for robotic harvesting. |
Ben-Gurion University | Israel | [61,97,98] | CROPS, SWEEPER | Agricultural harvesting robots, including literature survey. |
University of Sydney | Australia | [99,100,101] | n/a | Tree detection, LiDAR and UAV photogrammetry. |
Sensor Technology | Sensing Type | Advantages | Disadvantages |
---|---|---|---|
RGB camera | imaging sensor | allows for relatively inexpensive high-resolution imaging | does not include depth information |
RGB-D camera | imaging and ranging sensor | relates images to depth values | generally low-resolution to reduce costs |
thermal camera | temperature imaging sensor | temperature readings in image format can improve segmentation and help detect animals | generally low-resolution and more expensive than normal cameras |
hyperspectral sensor | imaging sensor w/ many specialised channels | allows for better segmentation (e.g., using vegetation indices) | expensive and heavy-duty when compared to other imaging techniques |
multispectral camera | imaging sensor w/ some specialised channels | allows for better segmentation (e.g., using vegetation indices); inexpensive | less powerful than its hyperspectral counterpart |
sonar | sound-based range sensor | allows for inexpensive obstacle avoidance | limited detection range and resolution |
LiDAR/LaDAR | laser-based range sensors | allow for precise 3D sensing | relatively expensive and difficult to extract information beyond spatial occupancy |
electronic compass | orientation sensor | allows for partial pose estimation | may suffer from magnetic interference |
inertial sensors | motion/vertical orientation sensors | allow for partial pose estimation | suffer from measurement drift |
GPS/GNSS | absolute positioning sensors | allow for localization and pose estimation | difficult to keep track of satellite signals in remote woodland environments |
Method | Input | Environment | Geometry | Data Structure/Framework |
---|---|---|---|---|
TUPPer-Map [205] | RGB-D | Urban | Mesh | Truncated Signed Distance Field (TSDF) |
Kimera-Multi [206] | RGB-D/PCL | Urban | Mesh | TSDF |
SSC [207] | PCL | Urban | 3D Points | Point Cloud |
MultiLayerMapping [208] | RGB-D | Urban | Voxels | Multi-Layered BGKOctoMap [211] |
Semantic OctoMap [31] | RGB-D/PCL | Forest/Urban | Voxels | OctoMap [210] |
Active MS Mapping [202,209] | Stereo/PCL | Forest/Urban | 3D Models | Factor-Graph |
Ref. | Year | Application | Platform | Input | Percepts | Algorithms |
---|---|---|---|---|---|---|
[65] | 2017 | Agriculture | UAV Swarm | Position of agent, position of other agents, detected weed density, confidence | Weed density map | Model fitting |
[62] | 2013 | Agriculture | UGV | Multispectral Images | Detected hard and soft obstacles, segmented plant parts | CART |
[68] | 2017 | Agriculture | UAV | RGB | Segmented crop and weed sections | Random Forests |
[69] | 2018 | Agriculture | UGV | RGB | Segmented crop and weed sections | CNN |
[70] | 2018 | Agriculture | UGV | RGB, NIR | Segmented crop and weed sections | CNN |
[45] | 2018 | Agriculture | UGV | RGB, crop row model | Segmented crop and weed sections | SVM, LVQ, AES, ODMD |
[85] | 2018 | Agriculture | MAV | Hyperspectral, RGB | Chlorophyll values in given area | Several |
[87] | 2013 | Agriculture | UGV | LRF | Geometry of sprayed plant | Model fitting |
[75] | 2016 | Forestry | UAV | RGB, NIR | Tree species distribution, health status | RF |
[107] | 2022 | Forestry | UGV | LRF, Multispectral Images | Depth registered image, Segmented Image, 3D point clouds with live flammable material | CNN, IPBasic |
[121] | 2007 | Forestry | UGV | LRF, RGB, NDVI, NIR | 2D Grid of traversal costs, ground plane estimate, rigid and soft obstacle classification | Linear Maximum Entropy Classifiers |
[224] | 2016 | Forestry | UAV | RGB | Trail direction | DNN |
[296] | 2020 | Forestry | Two UAVs | 2D LRF, IMU, Altimeter | 2D collaborative map of explored forest area, 3D voxel grid with tree positions | Frontier-based exploration, CSLAM |
[31] | 2022 | Forestry | UAV | IMU, Stereo Cameras, LRF | 3D Semantic Map, Traversability indexes | Multi-sensor Factor Graph SLAM, SegFormer CNN |