Search Results (467)

Search Parameters:
Keywords = RGB imagery

25 pages, 32127 KiB  
Article
Deep Learning Approach for Studying Forest Types in Restored Karst Rocky Landscapes: A Case Study of Huajiang, China
by Jiaxue Wan, Zhongfa Zhou, Meng Zhu, Jiale Wang, Jiajia Zheng, Changxiang Wang, Xiaopiao Wu and Rongping Liu
Forests 2024, 15(12), 2122; https://doi.org/10.3390/f15122122 (registering DOI) - 1 Dec 2024
Viewed by 185
Abstract
Forest restoration landscapes are vital for restoring native habitats and enhancing ecosystem resilience. However, field monitoring (lasting months to years) in areas with complex surface habitats affected by karst rocky desertification is time-consuming. To address this, forest structural parameters were introduced, and training samples were optimized by excluding fragmented samples and those with a positive case ratio below 30%. The U-Net instance segmentation model in ArcGIS Pro was then applied to classify five forest restoration landscape types: intact forest, agroforestry, planted forest, unmanaged, and managed naturally regenerated forests. The optimized model achieved a 2% improvement in overall accuracy, with unmanaged and intact forests showing the highest increases (7%). Incorporating tree height and age improved the model’s accuracy by 3.5% and 1.9%, respectively, while biomass reduced it by 2.9%. RGB imagery combined with forest height datasets was most effective for agroforestry and intact forests, RGB imagery with aboveground biomass was optimal for unmanaged naturally regenerated forests, and RGB imagery with forest age was most suitable for managed naturally regenerated forests. These findings provide a practical and efficient method for monitoring forest restoration and offer a scientific basis for sustainable forest management in regions with complex topography and fragile ecosystems. Full article
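A minimal sketch of the band-stacking step this entry describes (appending a co-registered forest-height raster to the RGB orthomosaic before segmentation) is shown below, assuming rasterio and NumPy; the file names, float32 output, and 4-band layout are illustrative assumptions, not the authors' ArcGIS Pro U-Net workflow.

```python
# Sketch: append a canopy-height band to an RGB orthomosaic so a segmentation
# model can train on a 4-band stack. File names are hypothetical placeholders.
import numpy as np
import rasterio

def stack_rgb_with_height(rgb_path: str, chm_path: str, out_path: str) -> None:
    with rasterio.open(rgb_path) as rgb_src, rasterio.open(chm_path) as chm_src:
        rgb = rgb_src.read([1, 2, 3]).astype("float32")   # (3, H, W)
        chm = chm_src.read(1).astype("float32")           # (H, W), assumed co-registered
        profile = rgb_src.profile
    stacked = np.concatenate([rgb, chm[np.newaxis, ...]], axis=0)  # (4, H, W)
    profile.update(count=stacked.shape[0], dtype="float32")
    with rasterio.open(out_path, "w", **profile) as dst:
        dst.write(stacked)

# Example with hypothetical paths:
# stack_rgb_with_height("huajiang_rgb.tif", "forest_height.tif", "rgb_plus_ch.tif")
```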
Show Figures

Figure 1: (a) Map showing the location of Hua Jiang. (i) Changes in the forested areas within the study area from 2000 to 2020, and (ii) specific locations of the field sampling points in 2020. (b) Proportions of the sampling points, including intact forest (IF), naturally regenerating forest with signs of forest management (NRFM), naturally regenerating forest without signs of forest management (NRF), plantation forests (PF), and agroforestry (AF); and (c-f) different evolutionary stages of the forest restoration.
Figure 2: Template remote sensing images.
Figure 3: Comparison of image preprocessing stages.
Figure 4: Dataset annotation workflow.
Figure 5: Workflow implemented in ArcGIS Pro 3.2.2.
Figure 6: Changes in the overall accuracy. (a) Variations in the accuracy with slice size; and (b) changes in the accuracy with learning rates. The curves are the accuracy fitting curves.
Figure 7: Post-processing diagram. (a) Removal of areas of change with sizes less than 385 m² using the threshold method; and (b) filling of small holes in change areas using the morphological method. The red box indicates the areas that need to be processed.
Figure 8: Comparison of samples before and after optimization. (a,b) Proportions of the sample quantities before and after optimization; the numerical values denote the number of samples. (c,d) Proportions of the sample areas before and after optimization; the numerical values denote the sample area in hectares.
Figure 9: Comparison of accuracy before and after sample optimization. (a-e) Comparison of the accuracy, recall, and F1 scores across the five types of forest restoration landscapes; the curves are the spline curves of the data values. (f) Changes in the overall accuracy; (g) training loss function; and (h) validation loss function. The dots mark the positions where the pre- and post-optimization values intersect.
Figure 10: Results of the forest restoration landscape recognition model. (a) Accuracies, recall rates, and F1 scores for the five types of forest restoration landscapes across the four datasets; the red bars in (a) represent values greater than 0.8, while the blue bars represent values less than 0.8; and (b) overall accuracy for each dataset.
Figure 11: Detailed comparison of results for different datasets. (a) Original images; (b) label masks; (c) estimated results using the RGB dataset; (d) estimated results using the RGB + CH dataset; (e) estimated results using the RGB + FAGE dataset; and (f) estimated results using the RGB + FAGB dataset.
Figure 12: Model generalization accuracy. (a) Confusion matrix for the forest restoration landscape types; the values are normalized based on the percentage of the total for each class. (b) Precision, recall, and F1 scores for the five types of forest restoration landscapes.
Figure 13: Examples of predictions for the different types of forest restoration. The first row shows Tianditu images, the second row shows photographs from field sampling points, and the third row shows the corresponding classification results of the deep learning model.
14 pages, 1862 KiB  
Article
Evaluating Water Turbidity in Small Lakes Within the Taihu Lake Basin, Eastern China, Using Consumer-Grade UAV RGB Cameras
by Dong Xie, Yunjie Qiu, Xiaojie Chen, Yuchen Zhao and Yuqing Feng
Drones 2024, 8(12), 710; https://doi.org/10.3390/drones8120710 - 28 Nov 2024
Viewed by 445
Abstract
Small lakes play an essential role in maintaining regional ecosystem stability and water quality. However, turbidity in these lakes is increasingly influenced by anthropogenic activities, which presents a challenge for traditional monitoring methods. This study explores the feasibility of using consumer-grade UAVs equipped with RGB cameras to monitor water turbidity in small lakes within the Taihu Lake Basin of eastern China. By collecting RGB imagery and in situ turbidity measurements, we developed and validated models for turbidity prediction. RGB band indices were used in combination with three machine learning models, namely Interpretable Feature Transformation Regression (IFTR), Random Forest (RF), and Extreme Gradient Boosting (XGBoost). Results showed that models utilizing combinations of the R, G, B, and ln(R) bands achieved the highest accuracy, with the IFTR model demonstrating the best performance (R² = 0.816, RMSE = 3.617, MAE = 2.997). The study confirms that consumer-grade UAVs can be an effective, low-cost tool for high-resolution turbidity monitoring in small lakes, providing valuable insights for sustainable water quality management. Future research should investigate advanced algorithms and additional spectral features to further enhance prediction accuracy and adaptability. Full article
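As a rough illustration of the band-index regression described in this abstract, the sketch below fits a random forest on the R, G, B, and ln(R) features; the IFTR model is the authors' own method and is not reproduced here, and the small synthetic arrays are placeholders rather than the study's field data.

```python
# Sketch: predict turbidity (NTU) from per-sample mean R, G, B digital numbers
# extracted at in situ stations. Random forest stands in for the paper's IFTR
# model; the arrays below are made-up placeholders, not the study data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
rgb_means = rng.uniform(40, 220, size=(60, 3))             # columns: R, G, B
turbidity = 0.2 * rgb_means[:, 0] - 0.1 * rgb_means[:, 2] + rng.normal(0, 2, 60)

# Feature set mirroring the reported combination: R, G, B, ln(R).
X = np.column_stack([rgb_means, np.log(rgb_means[:, 0])])
X_train, X_test, y_train, y_test = train_test_split(X, turbidity, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
print(f"R2={r2_score(y_test, pred):.3f}  MAE={mean_absolute_error(y_test, pred):.3f}")
```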
Show Figures

Figure 1: Locations of the two studied small lakes in Wujiang District, Suzhou City, Taihu Lake Basin.
Figure 2: Schematic structure of the IFTR model.
Figure 3: Orthophoto images (a-c) and (g-i) and kernel density estimations (d-f) and (j-l) of predicted values using the IFTR, RF, and XGBoost models' turbidity estimates in Beinanyingdang Lake and Dayuedang Lake. A window size of 10 × 10 pixels was used to estimate the kernel density.
Figure 4: Correlation between turbidity estimates from the IFTR (a), RF (b), and XGBoost (c) models and in situ turbidity measurements in Beinanyingdang Lake and Dayuedang Lake. Twenty-two field sampling data points (dots) collected in December were used to validate robustness and accuracy.
18 pages, 14095 KiB  
Article
Automated Stock Volume Estimation Using UAV-RGB Imagery
by Anurupa Goswami, Unmesh Khati, Ishan Goyal, Anam Sabir and Sakshi Jain
Sensors 2024, 24(23), 7559; https://doi.org/10.3390/s24237559 - 27 Nov 2024
Viewed by 186
Abstract
Forests play a critical role in the global carbon cycle, with carbon storage being an important carbon pool in the terrestrial ecosystem, and tree crown size serves as a versatile ecological indicator influencing factors such as tree growth, wind resistance, shading, and carbon sequestration. Tree crowns also contribute to habitat function, herbicide application, temperature regulation, etc. Understanding the relationship between tree crown area and stock volume is crucial, as it provides a key metric for assessing the impact of land-use changes on ecological processes. Traditional ground-based stock volume estimation using DBH (Diameter at Breast Height) is labor-intensive and often impractical. However, high-resolution UAV (unmanned aerial vehicle) imagery has revolutionized remote sensing and computer-based tree analysis, making forest studies more efficient and interpretable. Previous studies have established correlations between DBH, stock volume, and above-ground biomass, as well as between tree crown area and DBH. This research aims to explore the correlation between tree crown area and stock volume and to automate stock volume and above-ground biomass estimation by developing an empirical model using UAV-RGB data, making forest assessments more convenient and time-efficient. The study included a significant number of training and testing sites to ensure the performance level of the developed model. The findings underscore a significant association, demonstrating the potential of integrating drone technology with traditional forestry techniques for efficient stock volume estimation. The results highlight a strong exponential correlation between crown area and stem stock volume, with a coefficient of determination of 0.67 and a mean squared error (MSE) of 0.0015. The developed model, when applied to estimate cumulative stock volume using drone imagery, demonstrated a strong correlation with an R² of 0.75. These results emphasize the effectiveness of combining drone technology with traditional forestry methods to achieve more precise and efficient stock volume estimation and, hence, automate the process. Full article
(This article belongs to the Section Sensing and Imaging)
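The central empirical result of this entry is an exponential crown-area-to-stock-volume relationship; the sketch below fits a model of that form with SciPy's curve_fit. The sample numbers and starting parameters are placeholders, not the study's measurements.

```python
# Sketch: fit an exponential relationship between delineated crown area (m^2)
# and stem stock volume (m^3), as reported in the abstract (R^2 = 0.67 for the
# exponential form). The sample data are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def exp_model(area, a, b):
    return a * np.exp(b * area)

crown_area = np.array([12.0, 18.5, 25.0, 31.2, 40.8, 52.3, 60.1])    # m^2 (placeholder)
stock_volume = np.array([0.08, 0.11, 0.15, 0.19, 0.27, 0.38, 0.47])  # m^3 (placeholder)

params, _ = curve_fit(exp_model, crown_area, stock_volume, p0=(0.05, 0.03))
predicted = exp_model(crown_area, *params)
ss_res = np.sum((stock_volume - predicted) ** 2)
ss_tot = np.sum((stock_volume - np.mean(stock_volume)) ** 2)
print(f"a={params[0]:.4f}, b={params[1]:.4f}, R2={1 - ss_res / ss_tot:.3f}")
```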
Show Figures

Figure 1: A flowchart of the methodology for automated stock volume estimation.
Figure 2: Study area location map. This map shows the study area of the Indian Institute of Technology, which is situated in Indore city in the state of Madhya Pradesh.
Figure 3: Drone data acquisition flowchart.
Figure 4: Data collection using the drone. (a) shows the setup of the communication box for the real-time tracking of the drone. (b) shows the flight planning for the drone using BlueFire Touch software v4.1.9047.1979; during this stage, the waypoints for the drone flight were decided.
Figure 5: Three different study sites were identified during this study. The locations where the drone imagery of the tree-covered areas was captured are shown here.
Figure 6: Images of tree canopies captured by the drone at sites 1, 2, and 3.
Figure 7: Field work performed for collecting DBH values. (a) shows the geographic location data collection carried out using a GARMIN eTrex 10, and (b) depicts the DBH measurement of the tree trunks.
Figure 8: Flowchart for the tree crown delineation methodology.
Figure 9: (a) and (b) show the measurement of DBH computed from the field observations. (a) shows site 1, and (b) shows site 2.
Figure 10: Tree crown delineation from the drone imagery for site 1 (a) and tree crown delineation from the drone imagery for site 2 (b).
Figure 11: (a) shows the relationship between the tree trunk circumference and crown area. (b) shows the relationship between the DBH and tree crown.
Figure 12: (a) shows the relationship between tree crown and stem volume. (b) shows the crown area data filtered. (c) shows the training data points for the model. (d) shows the testing data points of the model developed.
Figure 13: (a) shows the values of stock volume from field measurements on the x-axis and for the model on the y-axis. (b) shows the values of the AGB from field measurements on the x-axis and for the model on the y-axis. (c) shows the values of ton carbon from field measurements on the x-axis and for the model on the y-axis. (d) shows the values of tons/ha CO₂ emissions from field measurements on the x-axis and for the model on the y-axis.
Figure 14: These plots show the accuracy assessment of the model by plotting the volumes computed by the model and the field measurements, respectively.
Figure 15: These plots show the accuracy assessment of the model by plotting the AGB computed by the model and the field measurements, respectively.
Figure 16: Validating the model for computing the cumulative stock volume.
18 pages, 3847 KiB  
Article
EC-WAMI: Event Camera-Based Pose Optimization in Remote Sensing and Wide-Area Motion Imagery
by Isaac Nkrumah, Maryam Moshrefizadeh, Omar Tahri, Erik Blasch, Kannappan Palaniappan and Hadi AliAkbarpour
Sensors 2024, 24(23), 7493; https://doi.org/10.3390/s24237493 (registering DOI) - 24 Nov 2024
Viewed by 413
Abstract
In this paper, we present EC-WAMI, the first successful application of neuromorphic event cameras (ECs) for Wide-Area Motion Imagery (WAMI) and Remote Sensing (RS), showcasing their potential for advancing Structure-from-Motion (SfM) and 3D reconstruction across diverse imaging scenarios. ECs, which detect asynchronous pixel-level brightness changes, offer key advantages over traditional frame-based sensors such as high temporal resolution, low power consumption, and resilience to dynamic lighting. These capabilities allow ECs to overcome challenges such as glare, uneven lighting, and low-light conditions that are common in aerial imaging and remote sensing, while also extending UAV flight endurance. To evaluate the effectiveness of ECs in WAMI, we simulate event data from RGB WAMI imagery and integrate them into SfM pipelines for camera pose optimization and 3D point cloud generation. Using two state-of-the-art SfM methods, namely, COLMAP and Bundle Adjustment for Sequential Imagery (BA4S), we show that although ECs do not capture scene content like traditional cameras, their spike-based events, which only measure illumination changes, allow for accurate camera pose recovery in WAMI scenarios even in low-framerate (5 fps) simulations. Our results indicate that while BA4S and COLMAP provide comparable accuracy, BA4S significantly outperforms COLMAP in terms of speed. Moreover, we evaluate different feature extraction methods, showing that the deep learning-based LIGHTGLUE descriptor consistently outperforms traditional handcrafted descriptors by providing improved reliability and accuracy of event-based SfM. These results highlight the broader potential of ECs in remote sensing, aerial imaging, and 3D reconstruction beyond conventional WAMI applications. Our dataset will be made available for public use. Full article
(This article belongs to the Section Physical Sensors)
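The abstract above rests on the event-camera principle that a pixel fires only when its log brightness changes by more than a contrast threshold. The toy sketch below applies that idea to a pair of synthetic frames; the paper itself uses the V2E simulator, so this is a conceptual stand-in, not the authors' pipeline.

```python
# Sketch: the basic event-camera principle applied to two consecutive frames,
# i.e. emit an event wherever log intensity changes by more than a contrast
# threshold. Synthetic frames serve as placeholders.
import numpy as np

def events_from_frames(prev: np.ndarray, curr: np.ndarray, threshold: float = 0.2):
    """Return (rows, cols, polarities) for pixels whose log brightness changed."""
    eps = 1e-3                                        # avoid log(0)
    delta = np.log(curr + eps) - np.log(prev + eps)
    rows, cols = np.nonzero(np.abs(delta) >= threshold)
    polarity = np.sign(delta[rows, cols]).astype(np.int8)   # +1 brighter, -1 darker
    return rows, cols, polarity

rng = np.random.default_rng(1)
frame0 = rng.uniform(0.1, 1.0, size=(64, 64))
frame1 = frame0.copy()
frame1[20:30, 20:30] *= 1.5                           # a brightening patch
r, c, p = events_from_frames(frame0, frame1)
print(f"{len(r)} events, {np.sum(p > 0)} positive / {np.sum(p < 0)} negative")
```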
Show Figures

Figure 1: Block diagram of our EC-WAMI pipeline. RGB event data are simulated using an event simulator and frames are reconstructed with a frame reconstructor. The reconstructed frames are then fed into an SfM algorithm for camera pose optimization and 3D reconstruction.
Figure 2: Conventional (e.g., COLMAP) versus BA4S SfM pipelines [15]. In the conventional SfM pipeline (a), camera poses and outliers are simultaneously estimated using RANSAC, and metadata may be used as extra constraints in optimization. In BA4S (b), camera metadata are used directly, and there is no model estimation, explicit outlier elimination, or RANSAC filtering of mismatches.
Figure 3: RGB sample frames in our WAMI datasets: top, ABQ-215; bottom, DIRSIG-RIT. The second column illustrates events simulated with Video-to-Event (V2E), while the third column shows the reconstructed frames from events simulated with Event-to-Video (E2VID).
Figure 4: Errors of the recovered camera poses when using BA4S and COLMAP on the two aerial image sequences: (a) positional error (percentage in meters, using Equation (7)) and (b) angular error (degrees, using Equation (9)). LIGHTGLUE outperforms other feature descriptors in terms of both the positional and angular error metrics; in contrast, ORB shows comparatively lower performance and trails in terms of both measures.
Figure 5: Recovered camera trajectories compared to ground truth for the eABQ-215 and eRIT datasets consisting of frames extracted from simulated event data. The recovered 3D trajectories closely match the ground truth.
Figure 6: Recovered camera trajectories in 2D, showing the differences between BA4S and COLMAP on aerial images. The top row shows the trajectory and difference for the eABQ-215 dataset, while the second row illustrates the same results for the eRIT dataset. These graphs correspond to the 3D trajectories shown in Figure 5. BA4S displays a smoother trajectory, while COLMAP has a more jagged trajectory. There is a small difference between the ground truth and the optimized camera poses based on the reconstructed frames generated from simulated WAMI event data, demonstrating the effectiveness of our approach; Table 2 provides additional details. In (a), the optimized trajectory for the eABQ-215 dataset; (b), the difference between the optimized trajectory for eABQ-215 and the ground truth; (c), the optimized trajectory for the eRIT dataset; (d), the difference between the optimized trajectory for eRIT and the ground truth.
Figure 7: Sparse and dense 3D point clouds produced using the two SfM pipelines on the eABQ-215 and eRIT event camera datasets. The top two rows show the point clouds for eABQ-215 and the bottom two rows show the point clouds for eRIT. The results demonstrate that Gaussian splatting (GS) produces high-quality 3D scene reconstructions on both event datasets.
Figure 8: Camera pose recovery with a traditional RGB camera in a challenging illumination scenario: (a) shows a simulated RGB image with perturbations generated by applying techniques from [51] to the RGB image in Figure 3a, while (b,c) depict failed camera trajectory recovery when the perturbed traditional image was used as input for BA4S and COLMAP. These results underscore the limitations of traditional cameras in recovering pose under challenging illumination conditions.
19 pages, 53371 KiB  
Article
Efficient UAV-Based Automatic Classification of Cassava Fields Using K-Means and Spectral Trend Analysis
by Apinya Boonrang, Pantip Piyatadsananon and Tanakorn Sritarapipat
AgriEngineering 2024, 6(4), 4406-4424; https://doi.org/10.3390/agriengineering6040250 - 22 Nov 2024
Viewed by 386
Abstract
High-resolution images captured by Unmanned Aerial Vehicles (UAVs) play a vital role in precision agriculture, particularly in evaluating crop health and detecting weeds. However, the detailed pixel information in these images makes classification a time-consuming and resource-intensive process. Despite these challenges, UAV imagery is increasingly utilized for various agricultural classification tasks. This study introduces an automatic classification method designed to streamline the process, specifically targeting cassava plants, weeds, and soil classification. The approach combines K-means unsupervised classification with spectral trend-based labeling, significantly reducing the need for manual intervention. The method ensures reliable and accurate classification results by leveraging color indices derived from RGB data and applying mean-shift filtering parameters. Key findings reveal that the combination of the blue (B) channel, Visible Atmospherically Resistant Index (VARI), and color index (CI) with filtering parameters, including a spatial radius (sp) = 5 and a color radius (sr) = 10, effectively differentiates soil from vegetation. Notably, using the green (G) channel, excess red (ExR), and excess green (ExG) with filtering parameters (sp = 10, sr = 20) successfully distinguishes cassava from weeds. The classification maps generated by this method achieved high kappa coefficients of 0.96, with accuracy levels comparable to supervised methods like Random Forest classification. This technique offers significant reductions in processing time compared to traditional methods and does not require training data, making it adaptable to different cassava fields captured by various UAV-mounted optical sensors. Ultimately, the proposed classification process minimizes manual intervention by incorporating efficient pre-processing steps into the classification workflow, making it a valuable tool for precision agriculture. Full article
(This article belongs to the Special Issue Computer Vision for Agriculture and Smart Farming)
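A minimal sketch of the unsupervised core of this method, computing RGB colour indices per pixel and clustering them with K-means, follows; the mean-shift filtering and spectral-trend labelling that make the published workflow automatic are omitted, and the particular index combination and synthetic image are illustrative assumptions.

```python
# Sketch: compute RGB colour indices (ExG, VARI) for every pixel and cluster
# them with K-means into three candidate classes (cassava / weed / soil).
# The paper's mean-shift filtering and spectral-trend labelling steps are not
# reproduced here; the input image is a synthetic placeholder.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, size=(100, 100, 3))        # placeholder RGB, values in [0, 1]
R, G, B = img[..., 0], img[..., 1], img[..., 2]

exg = 2 * G - R - B                                    # Excess Green Index
vari = (G - R) / (G + R - B + 1e-6)                    # Visible Atmospherically Resistant Index

features = np.stack([B.ravel(), vari.ravel(), exg.ravel()], axis=1)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
label_map = labels.reshape(img.shape[:2])              # per-pixel cluster IDs
print(np.bincount(labels))                             # pixels per cluster
```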
Show Figures

Figure 1: Study area of cassava fields captured by the DJI Phantom 4 Pro sensor.
Figure 2: Study area of cassava fields captured by the DJI Phantom 4 sensor.
Figure 3: Proposed classification process.
Figure 4: Boxplot of the spectral value of classes.
Figure 5: Kappa coefficient of K-means, RF, and the proposed classification process.
Figure 6: Classification results using the proposed classification process: (a) Plot 1, showing results from an area with patchy weeds and thin weed patches; (b) Plot 5, showing results from an area with fewer weed patches and dense weed coverage; (c) Plot 8, showing results from an area with varying light illumination.
28 pages, 75722 KiB  
Article
An Integrated Approach to Riverbed Morphodynamic Modeling Using Remote Sensing Data
by Matteo Bozzano, Francesco Varni, Monica De Martino, Alfonso Quarati, Nicoletta Tambroni and Bianca Federici
J. Mar. Sci. Eng. 2024, 12(11), 2055; https://doi.org/10.3390/jmse12112055 - 13 Nov 2024
Viewed by 562
Abstract
River inlets, deltas, and estuaries represent delicate ecosystems highly susceptible to climate change impacts. While significant progress has been made in understanding the morphodynamics of these environments in recent decades, the development of models still requires thorough testing and data integration. In this context, remote sensing emerges as a potent tool, providing crucial data and the ability to monitor temporal changes. In this paper, an integrated approach combining remote sensing and morphodynamic modeling is proposed to assess river systems comprehensively. By utilizing multispectral or RGB optical imagery from satellites or UAVs for river classification and remotely derived bathymetry, echo sounder data for ground truth, and photogrammetric modeling of emerged areas, we outline a procedure to create an integrated and continuous digital terrain model (DTM) of a riverbed, paying particular attention to the wet–dry interface. This method enables us to identify the river centerline, its width, and its slope variations. Additionally, by applying a linear morphodynamic model that considers the spatial variability of river morphology commonly found in estuarine environments, it is possible to predict the wavelength and migration rate of sediment bars. This approach has been successfully applied to recreate the DTM and monitor the morphodynamics of the seaward reach of the Roya River (Italy). Full article
(This article belongs to the Special Issue Remote Sensing and GIS Applications for Coastal Morphodynamic Systems)
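For the remotely derived bathymetry (RDB) component, the hedged sketch below calibrates an exponential model against echo-sounder depths; the abstract does not state the predictor variable, so the log blue/green band ratio, the starting parameters, and the sample values are all assumptions.

```python
# Sketch: calibrate an exponential remotely derived bathymetry (RDB) model
# against echo-sounder ground truth. The abstract mentions an exponential RDB
# model but not its exact predictor; the log blue/green band ratio used below
# is a common choice and is an assumption here, as are the sample values.
import numpy as np
from scipy.optimize import curve_fit

def rdb_model(x, a, b, c):
    return a * np.exp(b * x) + c          # depth as an exponential function of the predictor

blue = np.array([0.082, 0.075, 0.070, 0.064, 0.060, 0.055])   # placeholder reflectances
green = np.array([0.070, 0.068, 0.066, 0.065, 0.064, 0.063])
depth = np.array([0.4, 0.8, 1.3, 1.9, 2.4, 3.1])              # echo-sounder depths (m)

x = np.log(blue / green)
params, _ = curve_fit(rdb_model, x, depth, p0=(1.0, -8.0, 0.0), maxfev=10000)
print("calibrated a, b, c:", np.round(params, 3))
print("predicted depths:", np.round(rdb_model(x, *params), 2))
```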
Show Figures

Figure 1: Geolocalization of the Roya basin and its seaward reach, selected as the study area.
Figure 2: RGB aerial orthophoto of the study area (on the left). SBES survey conducted within the seaward reach of the Roya River (on the right).
Figure 3: Photogrammetric survey within the seaward reach of the Roya River.
Figure 4: Workflow for the computation of riverbed DTM from remotely sensed data.
Figure 5: Geometrical scheme of the river with a micro-tidal mouth adopted in the formulation of [30].
Figure 6: Workflow of the morphodynamic approach integrated with the DTM.
Figure 7: Result of the riverbed classification (a), and RDB (b).
Figure 8: Scatter density plots representing the results of the exponential RDB model calibration (a) and validation (b). The color of the points reflects their density, with yellow indicating areas with a high concentration of data points and blue showing sparsely distributed points.
Figure 9: River centerline and photogrammetric points of bare soil within the buffer zone around water areas (a); water surface derived by extending the elevation assigned to each point of the centerline, as the elevation of the nearest photogrammetric point of bare soil near the water areas (b).
Figure 10: Elevation data corresponding to bare soil, directly extracted from the photogrammetric digital surface model (a), bare soil and water points within a buffer around vegetation, used for the interpolation in vegetated areas (b), and resulting DTM for the vegetated areas (c).
Figure 11: DTM outcome: 2D visualization (a), and 3D visualization (b).
Figure 12: Elevation of the mean bottom profile (dots) and elevation of a constant slope profile (dashed line) best fitting real data.
Figure 13: Channel width (blue line) plotted versus the longitudinal coordinate of the river axis with origin at the river inlet. The orange line represents the mean width value (144.66 m). The equation at the top shows the exponential width variation best fitting real data.
Figure 14: Temporal behavior of the maximum amplitude of the bed perturbations for the first two lateral modes (mode 1 = alternate bars, mode 2 = central bars).
Figure 15: Bedform pattern of the bottom of the Roya River, obtained by subtracting the mean bottom elevation from the real one.
Figure 16: Bottom profile along the two river-banks, obtained from the DTM (a), versus the model's simulated one (b).
Figure 17: Comparison of the bottom profile along both river-banks, extracted from the DTM (a) and simulated by the model (b). Both plots display a similar alternating pattern of pools (red regions) and scours (blue regions).
Figure 18: DTM's correspondence with the reference surveys along two transects (a) and (b). The upper section of each transect displays the riverbed classes. Each line corresponds to a specific survey method. The red line represents the resulting DTM, forming a continuous surface that accurately approximates the survey data.
Figure 19: Qualitative example of a DTM reliability map, showing the different classes (water, bare soil, vegetation) with different color patterns, corresponding to specific data sources (RDB + water surface, photogrammetry, interpolation) characterized by different reliability. The DTM resulting from the proposed procedure is in the background.
26 pages, 19104 KiB  
Article
Accurately Segmenting/Mapping Tobacco Seedlings Using UAV RGB Images Collected from Different Geomorphic Zones and Different Semantic Segmentation Models
by Qianxia Li, Zhongfa Zhou, Yuzhu Qian, Lihui Yan, Denghong Huang, Yue Yang and Yining Luo
Plants 2024, 13(22), 3186; https://doi.org/10.3390/plants13223186 - 13 Nov 2024
Viewed by 410
Abstract
The tobacco seedling stage is a crucial period for tobacco cultivation. Accurately extracting tobacco seedlings from satellite images can effectively assist farmers in replanting, precise fertilization, and subsequent yield estimation. However, in complex Karst mountainous areas, it is extremely challenging to accurately segment tobacco plants due to a variety of factors, such as the topography, the planting environment, and difficulties in obtaining high-resolution image data. Therefore, this study explores an accurate segmentation model for detecting tobacco seedlings from UAV RGB images across various geomorphic partitions, including dam and hilly areas. It explores a family of tobacco plant seedling segmentation networks, namely, U-Net, U-Net++, Linknet, PSPNet, MAnet, FPN, PAN, and DeepLabV3+, using the Hill Seedling Tobacco Dataset (HSTD), the Dam Area Seedling Tobacco Dataset (DASTD), and the Hilly Dam Area Seedling Tobacco Dataset (H-DASTD) for model training. To validate the performance of the semantic segmentation models for crop segmentation in the complex cropping environments of Karst mountainous areas, this study compares and analyzes the predicted results with the manually labeled true values. The results show that: (1) the accuracy of the models in segmenting tobacco seedling plants in the dam area is much higher than that in the hilly area, with the mean values of mIoU, PA, Precision, Recall, and the Kappa Coefficient reaching 87%, 97%, 91%, 85%, and 0.81 in the dam area and 81%, 97%, 72%, 73%, and 0.73 in the hilly area, respectively; (2) The segmentation accuracies of the models differ significantly across different geomorphological zones; the U-Net segmentation results are optimal for the dam area, with higher values of mIoU (93.83%), PA (98.83%), Precision (93.27%), Recall (96.24%), and the Kappa Coefficient (0.9440) than those of the other models; in the hilly area, the U-Net++ segmentation performance is better than that of the other models, with mIoU and PA of 84.17% and 98.56%, respectively; (3) The diversity of tobacco seedling samples affects the model segmentation accuracy, as shown by the Kappa Coefficient, with H-DASTD (0.901) > DASTD (0.885) > HSTD (0.726); (4) With regard to the factors affecting missed segmentation, although the factors affecting the dam area and the hilly area are different, the main factors are small tobacco plants (STPs) and weeds for both areas. This study shows that the accurate segmentation of tobacco plant seedlings in dam and hilly areas based on UAV RGB images and semantic segmentation models can be achieved, thereby providing new ideas and technical support for accurate crop segmentation in Karst mountainous areas. Full article
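The accuracy metrics quoted above (mIoU, PA, and the Kappa Coefficient) can all be derived from a per-class confusion matrix of predicted versus labelled pixels; a short sketch with made-up counts follows.

```python
# Sketch: compute mIoU, pixel accuracy, and Cohen's kappa from a confusion
# matrix of predicted vs. labelled pixels. The 2x2 matrix (background vs.
# tobacco) is a made-up placeholder.
import numpy as np

def segmentation_metrics(cm: np.ndarray):
    cm = cm.astype(float)
    total = cm.sum()
    pixel_accuracy = np.trace(cm) / total
    # Per-class IoU: TP / (TP + FP + FN)
    iou = np.diag(cm) / (cm.sum(axis=0) + cm.sum(axis=1) - np.diag(cm))
    # Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)
    chance = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / total**2
    kappa = (pixel_accuracy - chance) / (1 - chance)
    return iou.mean(), pixel_accuracy, kappa

confusion = np.array([[9200, 300],     # rows: true background, tobacco
                      [400, 1100]])    # cols: predicted background, tobacco
miou, pa, kappa = segmentation_metrics(confusion)
print(f"mIoU={miou:.3f}  PA={pa:.3f}  Kappa={kappa:.3f}")
```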
Show Figures

Figure 1: Study area map. (a) China; (b) Guizhou province; (c) Zhenfeng and Anlong Counties; (d) UAV imagery data of the study area in the Zhenfeng and (e) Anlong Counties; (f) planting environments, growth, management of pests and diseases, and replanting of tobacco plants in the two study areas.
Figure 2: Acquisition and stitching of drone data. (A) UAS: (a) DJI Mavic 2 Pro drone equipped with (b) Hasselblad camera and (c) drone batteries, handle, tablet, etc.; (B) RTK ground control point data acquisition: (a) satellite signal receiver and (b) smart handbook (interactive interface); (C) Drone image stitching software (Pix4D): (a) Digital Orthophoto Map (DOM) and (b) Digital Surface Model (DSM).
Figure 3: Data set construction process.
Figure 4: Histogram of the accuracy of the different models in the hilly area.
Figure 5: Histogram of the accuracy of the different models in the dam area.
Figure 6: Histogram of the accuracy of the different models in the hilly dam area.
Figure 7: Visualization of the segmentation results of the eight deep learning models trained on the HSTD. I: tobacco plants of different sizes and the plots are flat; II: small tobacco plants and the plots are flat; III: small tobacco plants and the plots are more fragmented; IV: large tobacco plants and the plots are flat; V: large tobacco plants and the plots are fragmented; VI: small tobacco plants and the plots are flat; VII: large tobacco plants, the plots are flat with more weeds in the background, and the ridges of the soil are wide, high, and shaded; VIII: large tobacco plants, the plots are broken, and the ridges are wide, high, and shaded.
Figure 8: Visualization of the segmentation results of the eight deep learning models trained on the DASTD. I: presence of road, lots of weeds, and large tobacco plants; II: presence of trees, small amount of weeds, and small tobacco plants; III: small tobacco plants; IV: presence of trees, road, and other debris; V: large tobacco plants; VI: presence of trees and small tobacco plants; VII: large tobacco plants; VIII: presence of large trees, large tobacco plants, and brighter light in the region.
Figure 9: Visualization of the segmentation results of the eight deep learning models trained on the H-DASTD. I: presence of road, lots of weeds, and large tobacco plants; II: presence of trees, small amount of weeds, and small tobacco plants; III: small tobacco plants; IV: presence of trees, road, and other debris; V: large tobacco plants and fragmented plot; VI: small tobacco plants and flat plot; VII: large tobacco plants, flatter plot, more weeds in the background, and soil ridge is wide and high with shading; VIII: large tobacco plants, fragmented plots, and wide and high ridges with shading.
Figure 10: Histograms of mis-segmented tobacco plants in dam and hilly areas: (a) histogram of mis-segmented tobacco plants; (b) histogram of mis-segmented tobacco plants.
Figure 11: Map of factors affecting missed segmentation for different models in the dam area.
Figure 12: Map of factors influencing the mis-segmentation of different models in the dam area.
Figure 13: Map of factors affecting the mis-segmentation rates of different models in the hilly area.
Figure 14: Map of factors influencing the mis-segmentation of different models in the hilly area.
25 pages, 9546 KiB  
Article
Fusion of UAV-Acquired Visible Images and Multispectral Data by Applying Machine-Learning Methods in Crop Classification
by Zuojun Zheng, Jianghao Yuan, Wei Yao, Paul Kwan, Hongxun Yao, Qingzhi Liu and Leifeng Guo
Agronomy 2024, 14(11), 2670; https://doi.org/10.3390/agronomy14112670 - 13 Nov 2024
Viewed by 585
Abstract
The sustainable development of agriculture is closely related to the adoption of precision agriculture techniques, and accurate crop classification is a fundamental aspect of this approach. This study explores the application of machine learning techniques to crop classification by integrating RGB images and multispectral data acquired by UAVs. The study focused on five crops: rice, soybean, red bean, wheat, and corn. To improve classification accuracy, the researchers extracted three key feature sets: band values and vegetation indices, texture features extracted from a grey-level co-occurrence matrix, and shape features. These features were combined with five machine learning models: random forest (RF), support vector machine (SVM), k-nearest neighbour (KNN), classification and regression tree (CART), and artificial neural network (ANN). The results show that the Random Forest model consistently outperforms the other models, with an overall accuracy (OA) of over 97% and a significantly higher Kappa coefficient. Fusion of RGB images and multispectral data improved the accuracy by 1–4% compared to using a single data source. Our feature importance analysis showed that band values and vegetation indices had the greatest impact on classification results. This study provides a comprehensive analysis from feature extraction to model evaluation, identifying the optimal combination of features to improve crop classification and providing valuable insights for advancing precision agriculture through data fusion and machine learning techniques. Full article
(This article belongs to the Section Precision and Digital Agriculture)
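The sketch below illustrates the feature-plus-classifier idea in this abstract, building a per-object vector from band means, a vegetation index, and GLCM texture statistics, then training a random forest; it is a stand-in for the paper's object-based workflow, and the synthetic patches, labels, and chosen texture properties are assumptions.

```python
# Sketch: build a per-object feature vector (band means, a vegetation index,
# and GLCM texture statistics) and classify crop objects with a random forest.
# The synthetic patches and labels are placeholders, not the study data.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def object_features(patch_rgb: np.ndarray) -> np.ndarray:
    """patch_rgb: (H, W, 3) uint8 image of one segmented object."""
    r, g, b = (patch_rgb[..., i].astype(float) for i in range(3))
    exg = (2 * g - r - b).mean()                               # spectral feature
    gray = patch_rgb.mean(axis=2).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = [graycoprops(glcm, prop)[0, 0]                   # texture features
               for prop in ("contrast", "homogeneity", "energy")]
    return np.array([r.mean(), g.mean(), b.mean(), exg, *texture])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 32, 32, 3), dtype=np.uint8)  # placeholder objects
labels = rng.integers(0, 5, size=40)                                  # 5 crop classes
X = np.vstack([object_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```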
Show Figures

Figure 1: Schematic of the location of the study area. (Yellow stars are specific locations of data collection sites.)
Figure 2: Phenological periods of crops.
Figure 3: Orthophotos of visible-light and multispectral images.
Figure 4: Technology roadmap.
Figure 5: Schematic diagram of visible and multispectral image samples.
Figure 6: Schematic representation of random point locations used for accuracy assessment.
Figure 7: Visible light multi-scale segmentation results.
Figure 8: Multiscale segmentation results of multispectral images.
Figure 9: Producer accuracy for different categories and programs.
Figure 10: User accuracy for different categories and programs.
Figure 11: The 15 most important features of random forest based on P1–P21 (P2, P9, and P16 have only 11 features). The vertical coordinates indicate the different features. The horizontal coordinate indicates the contribution of the feature to the reduction of impurities in the random forest model. The larger the value, the greater the importance of the feature to the model decision. IFF is a fusion feature of RGB and multispectral bands.
Figure 12: Classification results of the optimal scheme based on the random forest model.
Figure 13: Confusion matrix generated based on the optimal solution.
27 pages, 5800 KiB  
Article
Multimodal Deep Learning Integration of Image, Weather, and Phenotypic Data Under Temporal Effects for Early Prediction of Maize Yield
by Danial Shamsuddin, Monica F. Danilevicz, Hawlader A. Al-Mamun, Mohammed Bennamoun and David Edwards
Remote Sens. 2024, 16(21), 4043; https://doi.org/10.3390/rs16214043 - 30 Oct 2024
Viewed by 840
Abstract
Maize (Zea mays L.) has been shown to be sensitive to temperature deviations, influencing its yield potential. The development of new maize hybrids resilient to unfavourable weather is a desirable aim for crop breeders. In this paper, we showcase the development of a multimodal deep learning model using RGB images, phenotypic, and weather data under temporal effects to predict the yield potential of maize before or during anthesis and silking stages. The main objective of this study was to assess if the inclusion of historical weather data, maize growth captured through imagery, and important phenotypic traits would improve the predictive power of an established multimodal deep learning model. Evaluation of the model performance when training from scratch showed its ability to accurately predict ~89% of hybrids with high-yield potential and demonstrated enhanced explanatory power compared with previously published models. Shapley Additive explanations (SHAP) analysis indicated the top influential features include plant density, hybrid placement in the field, date to anthesis, parental line, temperature, humidity, and solar radiation. Including weather historical data was important for model performance, significantly enhancing the predictive and explanatory power of the model. For future research, the use of the model can move beyond maize yield prediction by fine-tuning the model on other crop data, serving as a potential decision-making tool for crop breeders to determine high-performing individuals from diverse crop types. Full article
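The fusion module described in this abstract concatenates the embeddings produced by the image and tabular/weather branches before a final regression head; the PyTorch sketch below shows that late-fusion pattern with arbitrary layer sizes, which are assumptions rather than the published architecture.

```python
# Sketch: a minimal late-fusion head that concatenates an image embedding with
# a tabular/weather embedding and regresses yield (tonne/ha). Layer sizes are
# arbitrary assumptions, not the published architecture.
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, img_dim: int = 512, tab_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_dim + tab_dim, 128),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(128, 1),            # single regression output: predicted yield
        )

    def forward(self, img_embed: torch.Tensor, tab_embed: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([img_embed, tab_embed], dim=1)   # late fusion by concatenation
        return self.mlp(fused).squeeze(1)

# Toy forward pass with random embeddings standing in for the module outputs.
img_embed = torch.randn(8, 512)
tab_embed = torch.randn(8, 64)
print(FusionHead()(img_embed, tab_embed).shape)   # torch.Size([8])
```

Keeping the fusion head small like this makes it cheap to retrain when individual branches are swapped or fine-tuned on other crop data, which is the transfer scenario the abstract points to.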
Show Figures

Graphical abstract
Figure 1: Schematic overview of the multimodal deep learning model development.
Figure 2: Overview schematic of the image preprocessing workflow: (A) Orthomosaic generation; Metashape (v2.0.1, Agisoft) was used to combine images taken from UAV platforms. (B) Visualisation of shapefiles on orthomosaics using QGIS (v3.28.7-Firenze, Free Software Foundation); shapefiles were generated using the plotshpcreate package on R software (v2022.12.0 + 353, Posit Software) and imported alongside their respective orthomosaic for each year in QGIS. (C) Cropped images generated from QGIS; individual plot images were cropped from aligned shapefiles using QGIS and were further rotated horizontally and split into four quadrants using a custom Python script. A1–2 and B1–2 represent replicates of the same hybrid lineage. (D) Final processed image; the quadrants corresponding to one replicate are stacked on top of each other for all timepoints. Processed images serve as inputs to the image-specific model, where the pixel values are used for feature extraction and model learning.
Figure 3: Overview of the layers used within each modality. All modules generate a target prediction. Repeated layers are visualised by a dotted line: (A) tab-DNN architecture with embedding layers for each categorical feature. (B) XResNet18 Image-DNN architecture; this framework uses a modified ResNet architecture with 18 layers, including a self-attention layer. (C) DenseNet-121 Image-DNN architecture; this framework uses four dense blocks, each with 6, 12, 24, and 16 dense layers, respectively. Each dense block is connected by a transition layer.
Figure 4: Overview of the fusion module used within the multimodal architecture. The last layer weights from each module are concatenated, then undergo linear and non-linear transformations before generating a yield prediction.
Figure 5: Maize hybrid and yield distribution throughout the years, visualised using a boxplot portraying ground truth yields by year (A), a boxplot portraying ground truth yields by environment and year (B), and a Venn diagram portraying the number of maize hybrids grown each year.
Figure 6: Tab-DNN model holdout results: (A) Comparison between predictions (y-axis) vs. ground truth yields (x-axis) for each sample per year, measured at tonne/ha. (B) Comparison between prediction error (y-axis) vs. environment (x-axis) for each sample per year, calculated by subtracting the predicted yield from ground truth. (C) Confusion matrix showing the percentage of correctly predicted hybrids that were classified as either high (>10 tonne/ha), medium (6–9.99 tonne/ha), or low (<6 tonne/ha). Ground truth classifications are represented on the y-axis, with predicted performance on the x-axis.
Figure 7: The top ten features based on SHAP values: (A) the top ten features and their average impact on tab-DNN model holdout predictions by mean absolute SHAP value; (B) the top ten features and their directional impact on tab-DNN model holdout predictions by SHAP value per sample. Features are further described in Supplementary Table S2.
Figure 8: Image-DNN model holdout results: (A–C) Comparison between predictions (y-axis) vs. ground truth yields (x-axis) for each sample by environment and year, measured as tonne/ha. (D) Comparison between predictions (y-axis) vs. ground truth yields (x-axis) for each sample per year, measured as tonne/ha. (E) Comparison between hybrid count and yield distribution between ground truth and prediction values. (F) Comparison between prediction error (y-axis) vs. environment (x-axis) for each sample per year, calculated by subtracting the predicted yield from ground truth. (G) Confusion matrix showing the percentage of correctly predicted hybrids that were classified as either high (>10 tonne/ha), medium (6–9.99 tonne/ha), or low (<6 tonne/ha). Ground truth classifications are represented on the y-axis, with predicted performance on the x-axis.
Figure 9: Yield predictions generated from the multimodal architecture when trained from scratch. Comparison between predictions (y-axis) vs. ground truth yields (x-axis) for each sample by environment and year, measured at tonne/ha, using tab-DNN predictions (A), Image-DNN predictions (B), fusion module predictions (C), and weighted predictions (D). (E) Confusion matrix showing the percentage of correctly predicted hybrids that were classified as either high (>10 tonne/ha), medium (6–9.99 tonne/ha), or low (<6 tonne/ha). Ground truth classifications are represented on the y-axis, with predicted performance on the x-axis. (F) Classification error plot showcasing hybrids predicted as medium yield but with a ground truth class of high yield (light purple), and hybrids predicted as medium yield but with a ground truth class of low yield (mustard).
Figure 10: Yield predictions generated from the multimodal architecture when using pretrained modules. Comparison between predictions (y-axis) vs. ground truth yields (x-axis) for each sample by environment and year, measured at tonne/ha, using tab-DNN predictions (A), Image-DNN predictions (B), fusion module predictions (C), and weighted predictions (D). (E) Confusion matrix showing the percentage of correctly predicted hybrids that were classified as either high (>10 tonne/ha), medium (6–9.99 tonne/ha), or low (<6 tonne/ha). Ground truth classifications are represented on the y-axis, with predicted performance on the x-axis. (F) Classification error plot showcasing hybrids predicted as medium yield but with a ground truth class of high yield (light purple), and hybrids predicted as medium yield but with a ground truth class of low yield (mustard).
20 pages, 9894 KiB  
Article
Estimation of Strawberry Canopy Volume in Unmanned Aerial Vehicle RGB Imagery Using an Object Detection-Based Convolutional Neural Network
by Min-Seok Gang, Thanyachanok Sutthanonkul, Won Suk Lee, Shiyu Liu and Hak-Jin Kim
Sensors 2024, 24(21), 6920; https://doi.org/10.3390/s24216920 - 28 Oct 2024
Viewed by 640
Abstract
Estimating canopy volumes of strawberry plants can be useful for predicting yields and establishing advanced management plans. Therefore, this study evaluated the spatial variability of strawberry canopy volumes using a ResNet50V2-based convolutional neural network (CNN) model trained with RGB images acquired through manual unmanned aerial vehicle (UAV) flights equipped with a digital color camera. A preprocessing method based on the You Only Look Once v8 Nano (YOLOv8n) object detection model was applied to correct image distortions influenced by fluctuating flight altitude under a manual maneuver. The CNN model was trained using actual canopy volumes measured using a cylindrical case and small expanded polystyrene (EPS) balls to account for internal plant spaces. Estimated canopy volumes using the CNN with flight altitude compensation closely matched the canopy volumes measured with EPS balls (nearly 1:1 relationship). The model achieved a slope, coefficient of determination (R2), and root mean squared error (RMSE) of 0.98, 0.98, and 74.3 cm3, respectively, corresponding to an 84% improvement over the conventional paraboloid shape approximation. In the application tests, the canopy volume map of the entire strawberry field was generated, highlighting the spatial variability of the plant’s canopy volumes, which is crucial for implementing site-specific management of strawberry crops. Full article
(This article belongs to the Special Issue Feature Papers in Smart Agriculture 2024)
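The altitude-compensation preprocessing is sketched below in hedged form: each video frame is rescaled by a factor derived from the ratio of its detected plant count (from the YOLOv8n detector) to the scene-average count. Equations (1)–(3) are not reproduced in this listing, so the square-root scaling, the OpenCV resize, and the example counts are assumptions, not the published correction.

```python
# Sketch: compensate for varying flight altitude by rescaling each frame from
# the ratio of the per-image plant count (from the object detector) to the
# average count across all frames. The exact correction (Equations (1)-(3) in
# the paper) is not reproduced here; the square-root scaling below is an
# illustrative assumption, as are the example numbers.
import cv2
import numpy as np

def altitude_compensate(frame: np.ndarray, n_plants: int, mean_plants: float) -> np.ndarray:
    """Resize a frame so its apparent plant density matches the scene average."""
    scale = np.sqrt(n_plants / mean_plants)   # more plants visible -> higher altitude -> upscale
    h, w = frame.shape[:2]
    new_size = (int(round(w * scale)), int(round(h * scale)))
    return cv2.resize(frame, new_size, interpolation=cv2.INTER_LINEAR)

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # placeholder video frame
compensated = altitude_compensate(frame, n_plants=46, mean_plants=40.0)
print(frame.shape, "->", compensated.shape)
```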
Show Figures

Figure 1: (a) Manual UAV flight over the strawberry experimental field. (b) RGB images at a resolution of 1920 × 1080 pixels were extracted and resized from the UAV video frames captured during manual UAV flight.
Figure 2: A flowchart of the object detection-based image preprocessing to resize the images using the ratio of the number of counted plants and missing plants to the average number of plants and missing plants across all images (Equations (1)–(3)).
Figure 3: (a) Cropped and resized ROI image of individual sample plants. (b) Sample plant images centered on a 512 × 512-pixel mulch background.
Figure 4: Description of the CNN model for estimating canopy volume, adopted from Gang et al. [34] with modifications. The model utilizes a pre-trained ResNet50V2 [55] with ImageNet [56] as the backbone, with two convolutional layers used for preprocessing, followed by a fully connected layer for regression.
Figure 5: An acrylic cylindrical case with a strawberry plant filled with EPS balls to calculate the volume difference between the number of balls with and without plants.
Figure 6: An overview of the development and testing process for estimating strawberry canopy volume in UAV RGB imagery using an object detection-based CNN.
Figure 7: (a) Comparison of canopy volumes estimated from the linear regression model using a paraboloid shape and the actual canopy volumes measured using EPS balls. (b) Comparison of canopy volumes estimated from the developed model using RGB test dataset images and measured canopy volumes with height compensation using the object detection model. The color symbols represent the values for each strawberry variety. The dashed line shows the regression line.
Figure 8: Canopy volumes estimated from the developed model using RGB test dataset images and measured canopy volumes without height compensation using the object detection model. The color symbols indicate values for each strawberry variety. The dashed line shows the regression line.
Figure 9: Comparison of canopy volumes estimated from the developed model and the actual canopy volumes measured using EPS balls, (a) when 100% of canopy volume converted from canopy fullness level was used as target value and (b) when a 50/50 mix of converted canopy volume and canopy volume measured using EPS balls was used as target value. The color indices denote the values for each strawberry variety. The dashed lines show the regression lines.
Figure 10: A part of the canopy volume distribution map in the entire field.
Figure 11: Canopy volume distribution of the Brilliance variety from 2 to 23 February 2024.
Figure 12: Canopy volume distribution of the Medallion variety from 2 to 23 February 2024.
Figure 13: Box plots of weekly canopy volumes of the sampled plants measured using the EPS balls: (a) Brilliance variety; (b) Medallion variety, for 40 sampled plants.
18 pages, 11083 KiB  
Article
Influence of Spatial Scale Effect on UAV Remote Sensing Accuracy in Identifying Chinese Cabbage (Brassica rapa subsp. Pekinensis) Plants
by Xiandan Du, Zhongfa Zhou and Denghong Huang
Agriculture 2024, 14(11), 1871; https://doi.org/10.3390/agriculture14111871 - 23 Oct 2024
Viewed by 618
Abstract
The exploration of the impact of different spatial scales on the low-altitude remote sensing identification of Chinese cabbage (Brassica rapa subsp. Pekinensis) plants provides an important theoretical reference for balancing plant identification accuracy against work efficiency. This study focuses on Chinese cabbage plants during the rosette stage; RGB images were obtained by drones at different flight heights (20 m, 30 m, 40 m, 50 m, 60 m, and 70 m). Spectral sampling analysis was conducted on different ground backgrounds to assess their separability. From the four vegetation indices commonly used for crop recognition, the Excess Green Index (ExG), Red Green Ratio Index (RGRI), Green Leaf Index (GLI), and Excess Green Minus Excess Red Index (ExG-ExR), the optimal index was selected for extraction. Image processing methods such as frequency domain filtering, threshold segmentation, and morphological filtering were used to reduce the impact of weed and mulch noise on recognition accuracy. The recognition results were vectorized and combined with field data for statistical verification of accuracy. The research results show that (1) the ExG can effectively distinguish between soil, mulch, and Chinese cabbage plants; (2) images of different spatial resolutions differ in the optimal type of frequency domain filtering and convolution kernel size, and the threshold segmentation effect also varies; (3) as the spatial resolution of the imagery decreases, the optimal window size for morphological filtering decreases accordingly; and (4) at flight heights of 30 m to 50 m, recognition is best, balancing recognition accuracy and coverage efficiency. The proposed method helps agricultural growers and managers carry out precision planting management and planting structure optimization, and it supports the timely adjustment of planting density or layout to improve land use efficiency and optimize resource utilization. Full article
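The abstract names the Excess Green Index (ExG) as the most separable of the four indices and pairs it with threshold segmentation and morphological filtering (the Figure 4 caption below mentions Otsu-based processing). A minimal sketch of that kind of pipeline is shown here, assuming the common ExG definition 2g − r − b on band-normalized values; the kernel size, the choice of an opening operation, and the input file name are illustrative assumptions rather than the paper's tuned settings.

```python
import cv2
import numpy as np

def excess_green(bgr):
    """ExG = 2g - r - b computed on band-normalized (chromatic) coordinates."""
    b, g, r = cv2.split(bgr.astype(np.float32))
    total = b + g + r + 1e-6                     # avoid division by zero
    r_n, g_n, b_n = r / total, g / total, b / total
    return 2.0 * g_n - r_n - b_n

def segment_plants(bgr, open_kernel=5):
    """Otsu threshold on ExG followed by a morphological opening (sizes assumed)."""
    exg = excess_green(bgr)
    exg_8bit = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg_8bit, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (open_kernel, open_kernel))
    # Opening removes small weed/mulch speckle; a closing would instead fill gaps in plants
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

img = cv2.imread("cabbage_plot_30m.tif")         # hypothetical UAV orthophoto tile
plant_mask = segment_plants(img)
```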
(This article belongs to the Special Issue Application of UAVs in Precision Agriculture—2nd Edition)
Show Figures

Figure 1: Schematic diagram of the study area. Different growth conditions: (a) uneven growth, (b) generally poor growth, (c) generally good growth. Different background characteristics: (d) covered with white plastic mulch (with and without water adhesion), (e) plants with multiple features (including connected plants and yellow and green leaves).
Figure 2: Research technology roadmap.
Figure 3: Spectral curve of the image. The three curves representing red, green, and blue correspond to the red, green, and blue bands, respectively. The X-axis represents the sampling distance, which refers to the data measured along a line segment (profile line) drawn on the image; the Y-axis represents the spectral value at each point along the profile line. Note: (a) plants with green and yellow leaves; (b) the soil spectral curve; (c) the spectral curve of the mulch film background; (d) plant with green leaves and soil relationship; (e) plant with yellow leaves and soil relationship; (f) plant and mulch film.
Figure 4: Comparison of different morphological filtering results. Note: (b) represents the “Otsu” processing result, while (a,c) represent the results of “closing” and “opening” operations based on Otsu, respectively. The red box indicates the noise of the image; the black box indicates the conjoined area between plants.
23 pages, 16662 KiB  
Article
Evaluating Burn Severity and Post-Fire Woody Vegetation Regrowth in the Kalahari Using UAV Imagery and Random Forest Algorithms
by Madeleine Gillespie, Gregory S. Okin, Thoralf Meyer and Francisco Ochoa
Remote Sens. 2024, 16(21), 3943; https://doi.org/10.3390/rs16213943 - 23 Oct 2024
Viewed by 723
Abstract
Accurate burn severity mapping is essential for understanding the impacts of wildfires on vegetation dynamics in arid savannas. The frequent wildfires in these biomes often cause topkill, where the vegetation experiences above-ground combustion but the below-ground root structures survive, allowing for subsequent regrowth post-burn. Investigating post-fire regrowth is crucial for maintaining ecological balance, elucidating fire regimes, and enhancing the knowledge base of land managers regarding vegetation response. This study examined the relationship between bush burn severity and woody vegetation post-burn coppicing/regeneration events in the Kalahari Desert of Botswana. Utilizing UAV-derived RGB imagery combined with a Random Forest (RF) classification algorithm, we aimed to enhance the precision of burn severity mapping at a fine spatial resolution. Our research focused on a 1 km² plot within the Modisa Wildlife Reserve, extensively burnt by the Kgalagadi Transfrontier Fire of 2021. The UAV imagery, captured at various intervals post-burn, provided detailed orthomosaics and canopy height models, facilitating precise land cover classification and burn severity assessment. The RF model achieved an overall accuracy of 79.71% and effectively identified key burn severity indicators, including green vegetation, charred grass, and ash deposits. Our analysis revealed a >50% probability of woody vegetation regrowth in high-severity burn areas six months post-burn, highlighting the resilience of these ecosystems. This study demonstrates the efficacy of low-cost UAV photogrammetry for fine-scale burn severity assessment and provides valuable insights into post-fire vegetation recovery, thereby aiding land management and conservation efforts in savannas. Full article
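The burn-severity mapping rests on a Random Forest classifier trained on per-pixel predictors (the Figure 10 caption below lists RGB bands, a canopy height model, the green chromatic coordinate, and several spectral indices). A hedged scikit-learn sketch of that step follows; the feature subset, file layout, number of trees, and train/test split are assumptions for illustration, not the study's actual configuration.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical per-pixel feature table: RGB bands, canopy height (CHM),
# green chromatic coordinate (GCC), and a manually labelled land-cover class.
pixels = pd.read_csv("training_pixels.csv")            # assumed file layout
features = ["red", "green", "blue", "chm", "gcc"]      # subset of the paper's predictors
X_train, X_test, y_train, y_test = train_test_split(
    pixels[features], pixels["land_cover"], test_size=0.3,
    random_state=42, stratify=pixels["land_cover"])

rf = RandomForestClassifier(n_estimators=500, random_state=42, n_jobs=-1)
rf.fit(X_train, y_train)
print(classification_report(y_test, rf.predict(X_test)))

# Feature importances indicate which predictors drive the land-cover / burn-severity classes
for name, score in sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```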
Show Figures

Figure 1: Kgalagadi Transfrontier Fire (KTF) extent and location in Botswana. Modisa is indicated by a red star in the left panel of the figure. The natural color satellite imagery of the Kgalagadi Transfrontier Fire in the left panel was acquired by the National Aeronautics and Space Administration's (NASA, Washington, DC, USA) Moderate Resolution Imaging Spectroradiometer (MODIS) from its Aqua satellite on 8 September 2021 at a 250-m resolution.
Figure 2: The top left corner depicts the 1 sq. km post-burn plot of land that this study primarily focused on. The bottom panel offers a closer look at the burn impacts within the plot of land. The top right corner displays the location of the study site in Botswana, Modisa, which is indicated by a red star.
Figure 3: Flow chart showing the steps of the burn severity classification model along with the datasets and software used. R: red; G: green; B: blue; GLCM: gray-level co-occurrence matrix; UAS: unmanned aerial system.
Figure 4: Visualizations of the land cover classification schema and their corresponding burn severity rankings.
Figure 5: Original RGB drone images (left) and the manually classified land cover classifications (right).
Figure 6: Visual comparison of 12-h post-burn imagery and 6-month post-burn imagery. Woody vegetation regrowth visualization is defined and compared to herbaceous cover, as indicated by the red box outlines. Regrowth was determined based on patch regrowth rather than analysis at the pixel level.
Figure 7: Top panel: Original drone image 12 h post burn (left) and the Random Forest model-predicted land cover classification map (right). Three outlined regions (A, B, C) are indicated. Bottom panels: The zoomed-in regions from the model-predicted map and the original RGB map for better visualization.
Figure 8: Random Forest classification results reclassified to represent burn severity rankings.
Figure 9: Confusion matrix for Random Forest classification of burn severity. Each cell shows the proportion of observations predicted versus the actual observed categories, highlighting the model's precision and misclassification rates. Numerical values and gradient color of the cells represent the normalized value of correct pixel predictions.
Figure 10: Feature importance within the Random Forest classification model. CHM = Canopy Height Model; RGB = Red band, green band, blue band; GCC = Green Chromatic Coordinate; CI = Char Index; max_diff = Max Difference Index; EGI = Excessive Greenness Index; BI = Brightness Index.
Figure 11: Woody vegetation survival and regrowth. This figure presents the probability of survival and regrowth in woody vegetation at 6 months and 2.5 years post-burn across the 1000 derived Monte Carlo outputs. Mean probabilities and standard deviations are calculated for each category. Wider violin plots indicate a higher likelihood of regrowth, while narrower plots suggest a lower likelihood.
Figure 12: Sample of RGB images used within the manual classification dataset and their corresponding CHM in meters. White spots within the CHM are indicative of taller vegetation.
22 pages, 11338 KiB  
Article
Estimating Carbon Stock in Unmanaged Forests Using Field Data and Remote Sensing
by Thomas Leditznig and Hermann Klug
Remote Sens. 2024, 16(21), 3926; https://doi.org/10.3390/rs16213926 - 22 Oct 2024
Viewed by 856
Abstract
Unmanaged forest ecosystems play a critical role in addressing the ongoing climate and biodiversity crises. As there is no commercial interest in monitoring the health and development of such inaccessible habitats, low-cost assessment approaches are needed. We used a method combining RGB imagery acquired using an Unmanned Aerial Vehicle (UAV), Sentinel-2 data, and field surveys to determine the carbon stock of an unmanaged forest in the UNESCO World Heritage Site wilderness area Dürrenstein-Lassingtal in Austria. The entry-level consumer drone (DJI Mavic Mini) and freely available Sentinel-2 multispectral datasets were used for the evaluation. We merged the Sentinel-2 derived vegetation index NDVI with aerial photogrammetry data and used an orthomosaic and a Digital Surface Model (DSM) to map the extent of woodland in the study area. The Random Forest (RF) machine learning (ML) algorithm was used to classify land cover. Based on the acquired field data, the average carbon stock per hectare of forest was determined to be 371.423 ± 51.106 t of CO₂ and applied to the ML-generated class Forest. An overall accuracy of 80.8% with a Cohen's kappa value of 0.74 was achieved for the land cover classification, while the carbon stock of the living above-ground biomass (AGB) was estimated with an accuracy within 5.9% of field measurements. The proposed approach demonstrated that the combination of low-cost remote sensing data and field work can predict above-ground biomass with high accuracy. The results and the estimation error distribution highlight the importance of accurate field data. Full article
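Two steps in this abstract lend themselves to a short worked sketch: computing NDVI from Sentinel-2 bands and scaling the field-derived stock of 371.423 t CO₂ per hectare to the area classified as Forest. The snippet below illustrates both under stated assumptions; the thresholded toy classifier stands in for the Random Forest land-cover map, and the array inputs are placeholders, not the study's data.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + 1e-6)

def forest_carbon_stock(class_raster, pixel_area_m2, forest_class=1,
                        t_co2_per_ha=371.423):
    """Scale the per-hectare stock from the field plots to the mapped forest area."""
    forest_pixels = np.count_nonzero(class_raster == forest_class)
    forest_ha = forest_pixels * pixel_area_m2 / 10_000.0
    return forest_ha * t_co2_per_ha

# Toy example with placeholder rasters; real inputs would be Sentinel-2 band 8 (NIR)
# and band 4 (red) plus the Random Forest land-cover map described in the abstract.
nir_band = np.random.rand(100, 100)
red_band = np.random.rand(100, 100)
veg_index = ndvi(nir_band, red_band)
classes = (veg_index > 0.4).astype(np.uint8)               # crude stand-in for the RF classifier
print(forest_carbon_stock(classes, pixel_area_m2=100.0))   # Sentinel-2 pixels are 10 m x 10 m
```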
Show Figures

Figure 1: A map of the study area.
Figure 2: Sample plots for the in situ measurements.
Figure 3: The generated DSM and orthomosaic with a resolution of 3 cm.
Figure 4: NDVI indices with a resolution of 10 m.
Figure 5: Training samples.
Figure 6: The reference dataset and validation points (blue) for the Image Classification Wizard.
Figure 7: Tree species found in the field plots.
Figure 8: The classification raster of the study area.
Figure 9: A comparison of the land cover classification and the orthomosaic.
Figure 10: The Q-Q plot of the distribution of the carbon stock estimation errors.
25 pages, 17434 KiB  
Article
Using UAV RGB Images for Assessing Tree Species Diversity in Elevation Gradient of Zao Mountains
by Thi Cam Nhung Tran, Maximo Larry Lopez Caceres, Sergi Garcia i Riera, Marco Conciatori, Yoshiki Kuwabara, Ching-Ying Tsou and Yago Diez
Remote Sens. 2024, 16(20), 3831; https://doi.org/10.3390/rs16203831 - 15 Oct 2024
Viewed by 827
Abstract
Vegetation biodiversity in mountainous regions is controlled by altitudinal gradients and their corresponding microclimate. Higher temperatures, shorter snow cover periods, and high variability in the precipitation regime might lead to changes in vegetation distribution in mountains all over the world. In this study, we evaluate vegetation distribution along an altitudinal gradient (1334–1667 m.a.s.l.) in the Zao Mountains, northeastern Japan, by means of alpha diversity indices, including species richness, the Shannon index, and the Simpson index. In order to assess vegetation species and their characteristics along the mountain slope selected, fourteen 50 m × 50 m plots were selected at different altitudes and scanned with RGB cameras attached to Unmanned Aerial Vehicles (UAVs). Image analysis revealed the presence of 12 dominant tree and shrub species of which the number of individuals and heights were validated with fieldwork ground truth data. The results showed a significant variability in species richness along the altitudinal gradient. Species richness ranged from 7 to 11 out of a total of 12 species. Notably, species such as Fagus crenata, despite their low individual numbers, dominated the canopy area. In contrast, shrub species like Quercus crispula and Acer tschonoskii had high individual numbers but covered smaller canopy areas. Tree height correlated well with canopy areas, both representing tree size, which has a strong relationship with species diversity indices. Species such as F. crenata, Q. crispula, Cornus controversa, and others have an established range of altitudinal distribution. At high altitudes (1524–1653 m), the average shrubs’ height is less than 4 m, and the presence of Abies mariesii is negligible because of high mortality rates caused by a severe bark beetle attack. These results highlight the complex interactions between species abundance, canopy area, and altitude, providing valuable insights into vegetation distribution in mountainous regions. However, species diversity indices vary slightly and show some unusually low values without a clear pattern. Overall, these indices are higher at lower altitudes, peak at mid-elevations, and decrease at higher elevations in the study area. Vegetation diversity indices did not show a clear downward trend with altitude but depicted a vegetation composition at different altitudes as controlled by their surrounding environment. Finally, UAVs showed their significant potential for conducting large-scale vegetation surveys reliably and in a short time, with low costs and low manpower. Full article
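The alpha-diversity measures named in the abstract (species richness, the Shannon index, and the Simpson index) can be computed per plot from per-individual species labels derived from the UAV imagery. The sketch below uses the Gini–Simpson form 1 − Σp²; whether the authors report that form or the raw Simpson D is not stated here, and the example plot counts are invented purely for illustration.

```python
import math
from collections import Counter

def alpha_diversity(species_of_each_tree):
    """Species richness, Shannon index H', and Simpson index (1 - D) for one plot."""
    counts = Counter(species_of_each_tree)
    n = sum(counts.values())
    proportions = [c / n for c in counts.values()]
    richness = len(counts)
    shannon = -sum(p * math.log(p) for p in proportions)
    simpson = 1.0 - sum(p * p for p in proportions)    # Gini-Simpson form
    return richness, shannon, simpson

# Hypothetical 50 m x 50 m plot: individual counts are made up for the example
plot = (["Fagus crenata"] * 12 + ["Quercus crispula"] * 30 + ["Acer tschonoskii"] * 25)
print(alpha_diversity(plot))
```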
(This article belongs to the Special Issue Biomass Remote Sensing in Forest Landscapes II)
Show Figures

Figure 1: The location of the study area in the Zao Mountains. Site 1 (mixed forest); Site 2 (transition from mixed to monoculture forest); Site 3 (monoculture).
Figure 2: The orthomosaics were generated from raw RGB photos in Metashape software v2.1.3.
Figure 3: The 3D model of Site 1, generated from the DPC.
Figure 4: The 3D models of Plot 4 from 5 directions, facilitating vegetation visualization.
Figure 5: The Canopy Height Models (CHMs) were generated from the 3D models with the software Global Mapper v21.1.
Figure 6: An example of one of the posters used for fieldwork purposes.
Figure 7: Fourteen sample plots were set up in the study area along the elevation gradient.
Figure 8: Workflow of this study.
Figure 9: The number of individuals and the canopy area of dominant species in the 14 plots along the altitudinal gradient.
Figure 10: Change in tree species composition at different altitude layers within the study area.
Figure 11: Change in alpha-diversity indices in the plots along the altitudinal gradient (1336–1667 m).
15 pages, 3920 KiB  
Article
Monitoring and Optimization of Potato Growth Dynamics under Different Nitrogen Forms and Rates Using UAV RGB Imagery
by Yanran Ye, Liping Jin, Chunsong Bian, Jiangang Liu and Huachun Guo
Agronomy 2024, 14(10), 2257; https://doi.org/10.3390/agronomy14102257 - 29 Sep 2024
Viewed by 770
Abstract
The temporal dynamics of canopy growth are closely related to the accumulation and distribution of plant dry matter. Recently, unmanned aerial vehicles (UAVs) equipped with various sensors have been increasingly adopted in crop growth monitoring. In this study, two potato varieties were used as materials and treated with different combinations of nitrogen forms (nitrate and ammonium) and application rates (0, 150, and 300 kg ha⁻¹). A canopy development model was then constructed using low-cost time-series RGB imagery acquired by UAV. The objectives of this study were to quantify the variation in canopy development parameters under different nitrogen treatments and to explore the model parameters that represent the dynamics of plant dry matter accumulation, as well as those that contribute significantly to yield. The results showed that, except for the thermal time to canopy senescence (t2), other parameters of the potato canopy development model exhibited varying degrees of variation under different nitrogen treatments. The model parameters were more sensitive to nitrogen forms, such as ammonium and nitrate, than to application rates. The integral area (At) under the canopy development curve had a direct effect on plant dry matter accumulation (path coefficient of 0.78), and the two were significantly positively correlated (Pearson correlation coefficient of 0.93). Integral area at peak flowering (AtII) was significantly correlated with yield for both single and mixed potato varieties, having the greatest effect on yield (total effect of 1.717). In conclusion, UAV-acquired time-series RGB imagery could effectively quantify the variation of potato canopy development parameters under different nitrogen treatments and monitor the dynamic changes in plant dry matter accumulation. The regulation of canopy development parameters is of great importance and practical value for optimizing nitrogen management strategies and improving yield. Full article
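The canopy development model itself (adapted from Khan, per the Figure 2 caption below) is not reproduced in the abstract, but the integral area At, the quantity most strongly tied to dry matter and yield here, is simply the area under the canopy-cover curve over thermal time. The sketch below computes such an area numerically by trapezoidal integration; the thermal-time window used as a stand-in for AtII and the example series are assumptions for illustration only.

```python
import numpy as np

def integrated_canopy_cover(thermal_time, canopy_cover, t_start=None, t_end=None):
    """Area under the canopy-cover curve (A_t) over an optional thermal-time window."""
    thermal_time = np.asarray(thermal_time, dtype=float)
    canopy_cover = np.asarray(canopy_cover, dtype=float)
    lo = thermal_time[0] if t_start is None else t_start
    hi = thermal_time[-1] if t_end is None else t_end
    mask = (thermal_time >= lo) & (thermal_time <= hi)
    return np.trapz(canopy_cover[mask], thermal_time[mask])

# Hypothetical per-plot series: canopy cover (%) estimated from UAV RGB mosaics
tt = np.array([0, 150, 300, 450, 600, 750, 900, 1050])         # thermal time (deg C d)
cc = np.array([0, 5, 30, 70, 92, 90, 60, 15])                  # canopy cover (%)
a_total = integrated_canopy_cover(tt, cc)                      # A_t over the whole season
a_peak_flowering = integrated_canopy_cover(tt, cc, t_end=600)  # rough stand-in for A_tII
print(a_total, a_peak_flowering)
```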
Show Figures

Figure 1: Layout of the field experiment at the Zhangjiakou test site, China, and a diagram of plot splitting and background removal. V1: ‘Shapody’, V2: ‘Zhongshu 18’; N0: control, N1: 150 kg ha⁻¹ KNO₃, N2: 300 kg ha⁻¹ KNO₃, N3: 150 kg ha⁻¹ (NH₄)₂SO₄, N4: 300 kg ha⁻¹ (NH₄)₂SO₄.
Figure 2: Fitted curve of potato canopy dynamic development (adapted from Khan [18]). Vmax: maximum canopy cover; A1, A2, A3: areas under individual curve segments; tm1: inflection point during canopy expansion; t1: duration of canopy expansion; t2: onset of canopy senescence; te: the time point when canopy cover is zero.
Figure 3: Fitted curves of canopy development of two potato varieties under different nitrogen application rates and forms. (a) ‘Shapody’; (b) ‘Zhongshu 18’. N0: control, N1: 150 kg ha⁻¹ KNO₃, N2: 300 kg ha⁻¹ KNO₃, N3: 150 kg ha⁻¹ (NH₄)₂SO₄, N4: 300 kg ha⁻¹ (NH₄)₂SO₄.
Figure 4: Relationship between the total dry weight of the potato plants and At (a), and At × Pn (b). At: area under the canopy curve; Pn: the net photosynthetic rate.
Figure 5: Relationships between the net photosynthetic rate (Pn), area under the canopy curve (At), and total dry weight (TDW). Solid lines indicate the correlation coefficients between the two variables, and dashed lines indicate the path coefficients from the predictor variables to the response variable. ** indicates significant differences at the 0.01 level.
Figure 6: Pearson's correlation coefficient matrix of yield and model parameters for all experimental plots. Vmax: maximum canopy cover; tm1: inflection point during canopy expansion; t1: duration of canopy expansion; t2: onset of canopy senescence; te: the time point when canopy cover is zero; Cm1: maximum growth rate during canopy expansion; t2−t1: duration of maximum canopy cover; te−t2: duration of canopy senescence; A1, A2, A3: areas under individual curve segments; Asum: integrated ground cover; AtI, AtII, AtIII, and AtIV: area under the canopy curve for the budding, peak flowering, final flowering, and stem and leaf senescence stages, respectively.
Figure 7: Comparison of yield correlations with model parameters for the two varieties. V1: ‘Shapody’; V2: ‘Zhongshu 18’. Vmax: maximum canopy cover; tm1: inflection point during canopy expansion; t1: duration of canopy expansion; t2: onset of canopy senescence; te: the time point when canopy cover is zero; Cm1: maximum growth rate during canopy expansion; t2−t1: duration of maximum canopy cover; te−t2: duration of canopy senescence; A1, A2, A3: areas under individual curve segments; Asum: integrated ground cover; AtI, AtII, AtIII, and AtIV: area under the canopy curve for the budding, peak flowering, final flowering, and stem and leaf senescence stages, respectively. ** represents a significant correlation at the 0.01 level.