Article

Visualizing and Quantifying Vineyard Canopy LAI Using an Unmanned Aerial Vehicle (UAV) Collected High Density Structure from Motion Point Cloud

by Adam J. Mathews * and Jennifer L. R. Jensen
Department of Geography, Texas State University-San Marcos, 601 University Drive, San Marcos, TX 78666, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2013, 5(5), 2164-2183; https://doi.org/10.3390/rs5052164
Submission received: 7 March 2013 / Revised: 24 April 2013 / Accepted: 26 April 2013 / Published: 7 May 2013

Abstract

This study explores the use of structure from motion (SfM), a computer vision technique, to model vine canopy structure at a study vineyard in the Texas Hill Country. Using an unmanned aerial vehicle (UAV) and a digital camera, 201 aerial images (nadir and oblique) were collected and used to create an SfM point cloud. All points were classified as ground or non-ground points. Non-ground points, presumably representing vegetation and other aboveground objects, were used to create visualizations of the study vineyard blocks. Further, the relationship between non-ground points in close proximity to 67 sample vines and leaf area index (LAI) measurements collected for those same vines was also explored. Points near sampled vines were extracted, several metrics were calculated from them, and these metrics were input into a stepwise regression model to attempt to predict LAI. This analysis resulted in a moderate R2 value of 0.567, accounting for 57 percent of the variation in LAISQRT using six predictor variables. These results provide further justification for SfM-derived point clouds as a source of the three-dimensional data necessary for vegetation structure visualization and biophysical modeling over areas of smaller extent. Additionally, SfM datasets can provide increased temporal resolution compared to traditional three-dimensional datasets like those captured by light detection and ranging (lidar).


1. Introduction

Identification of spatial variation in leaf canopy density is important in crop management and for accurate biomass estimation. Within viticulture specifically, being able to recognize such disparities provides vineyard managers the opportunity to examine and address this spatial variability by adjusting the management scheme with the potential of improving the crop [1]. Vine canopy density is vital in protection and production of high quality winegrapes. Moderate canopy density is typically desired, depending on the time of the growing season, specific location, and grapevine varietal [2]. Passive remote sensing datasets like aerial and satellite-based imagery of vineyard canopy can successfully identify such variability in canopy density and subsequent crop health within vineyard blocks [3–6]. Calculated vegetation indices, namely the normalized difference vegetation index (NDVI; [7]), highly correlate with changes in canopy density measured by leaf area index (LAI; ratio of leaf surface area to ground surface area following [8]). More recently, other datasets, like those provided by active remote sensors, are beginning to play a role in such viticultural research.
Unlike imagery, both terrestrial and airborne discrete return light detection and ranging (lidar) systems provide an additional third dimension of information (Z) for height and volumetric analysis. Terrestrial lidar has been successfully implemented to explore biophysical properties of vines [9–13]. Keightley et al. [10] measured uprooted grapevine trunk biomass with a stationary terrestrial lidar scanner. Rosell et al. [9] utilized a tractor-mounted lidar sensor to create three-dimensional (3D) scenes of vineyards and fruit orchards. These lidar data were found to be strongly correlated with field measurements and therefore were highly accurate when used to portray the entire crop structure (trunks, canopy, and trellis systems if present). Similarly, Llorens et al. [12] generated whole vineyard 3D canopy structure maps with a lidar sensor mounted on a tractor while moving between vine rows. Llorens et al. [11] modeled leaf area and accurately gauged ideal pesticide amounts for vineyards and orchards. Sanz-Cortiella et al. [13] used a tractor-mounted lidar system to study pear tree leaf density and found that the sensor provided an accurate 3D representation of leaf area but was highly affected by the height and angle of the sensor. Rosell et al. [9] suggested that lidar data may be used to explore relationships with LAI. Llorens et al. [12], in turn, reported a moderate, positive correlation between number of lidar returns and measured LAI of a given portion of canopy. Similarly, high total leaf area of juvenile trees has been shown to directly correlate with point density of the terrestrial lidar point cloud [14]. In all of these cases, collected terrestrial-based lidar point clouds exist in a Cartesian coordinate system requiring a highly accurate location tracking global positioning system (GPS) mounted on the lidar sensor platform (tractor or otherwise) for proper georectification [12].
To a much lesser extent, airborne lidar datasets have proven useful in visualization of vine canopy and vineyard structure, leading to accurate delineation of vineyard parcels [15]. Although not specifically applied to viticulture, airborne lidar datasets can confidently predict LAI and other biophysical characteristics of tree vegetation by calculating several height-based metrics [16–19]. Yet another method, that of statistically-based modeling, was implemented by [20] to look at single vine canopy and explore potential light interception for different grapevine varietals. For reasons of practicality and cost, though, airborne and terrestrial lidar datasets have proven difficult to acquire [21], and repeat acquisitions are usually cost-prohibitive. Due to this, alternative ways to gather similar datasets, like Structure from Motion (SfM), have emerged [22]. Most recently, successful vineyard canopy modeling has been completed by way of SfM primarily for visualization [23,24]. Across an entire vineyard, Turner et al. [23] compared a pre-growth and full-growth point cloud of vineyard canopy in natural color by way of SfM. At a more reduced scale, another SfM-based vineyard analysis accurately classified vine structures (grapes, canopy, trellis and other hardware) along portions of a vine row [24].
SfM is a computer vision technique based heavily on the principles of photogrammetry wherein a significant number of photographs taken from different, overlapping perspectives are combined to recreate an environment (keypoint matching of features across images). SfM stems from a number of works, namely those of [25,26], which document the development of the Bundler algorithm that is now employed by the most well-known SfM platform: Microsoft PhotoSynth. Although SfM was first intended to be used for ground-based applications, it has been used from aerial platforms and for geographic applications [22,27–33]. For use in such geographic applications, the SfM output, which exists in an internally consistent but arbitrary coordinate system, must be transformed to real-world coordinates. Accordingly, georeferenced SfM datasets are similar to lidar datasets, consisting of a set of data points (the keypoints generated during SfM product creation) with X, Y, and Z information, known in its entirety as a point cloud, plus additional color information (red, green, and blue [RGB] spectral values) from the photographs. The cost to collect SfM point clouds remains very low compared to lidar; hence, there exists great interest in using such methods to model in 3D.
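To make the georeferencing step concrete, the minimal Python sketch below estimates a seven-parameter similarity (Helmert-style) transformation from ground control points and applies it to an SfM point cloud. The GCP coordinates and point cloud here are random placeholders, and this least-squares approach is only illustrative of the idea; it is not the transformation PhotoScan or the authors actually used.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale s, rotation R, translation t so that dst ~ s * R @ src + t.
    src, dst: (n, 3) arrays of matched points (e.g., GCPs in SfM and UTM coordinates)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)   # cross-covariance SVD (Kabsch/Umeyama)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:               # guard against a reflection solution
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = dst_mean - s * R @ src_mean
    return s, R, t

# Hypothetical GCPs: arbitrary SfM coordinates and their surveyed UTM equivalents (m).
rng = np.random.default_rng(0)
sfm_gcps = rng.random((5, 3))
utm_gcps = 3.2 * sfm_gcps + np.array([601000.0, 3350000.0, 450.0])  # placeholder values

s, R, t = similarity_transform(sfm_gcps, utm_gcps)

# Apply the same transformation to every point in a placeholder SfM point cloud.
cloud_sfm = rng.random((1000, 3))
cloud_utm = s * cloud_sfm @ R.T + t
```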
SfM-based 3D models have been used extensively in recreating urban and cultural features [26,27,31,34], and to a lesser extent topography and other surface features [30,33] such as vegetation [23,28]. The accuracy of the SfM approach, however, is often less trusted than other similar datasets provided by airborne or terrestrial lidar systems. Despite this, a number of research results insist that SfM point clouds are in fact comparable if not more accurate than lidar point clouds [22,28,33]. Unfortunately, comparison of such datasets is difficult unless both datasets are collected for the same research purpose and at similar point densities.
The SfM approach with vegetation has proven more difficult than with urban and other features because of their more complex and discontinuous structures [21,28]. Keypoint matching is considerably more difficult when working with vegetative features because of leaf gaps, repeating structures of the same color, and inconsistent/random geometries. The resulting SfM point cloud can therefore be more random and less uniform in its spatial coverage [28]. Despite this, satisfactory results of vegetation modeling (canopy height) with SfM have been reported [28]. Placement of colored field markers, modification of the SfM algorithms, increasing the number of photographs captured, and taking images at higher altitudes were just a few of the suggestions provided to improve vegetation modeling when implementing the SfM approach [28].
Besides [23,24], no studies have reported specifically modeling vineyard vegetation with SfM. More importantly, no studies have explored the relationship between SfM point clouds and in situ LAI measurements as have been explored with lidar data. Consequently, this study uses SfM to create a 3D vineyard point cloud to visualize vineyard vegetation as well as attempt to predict vine LAI based on information derived from the created SfM point cloud. A number of metrics are calculated with extracted points from the SfM point cloud that are compared to LAI measurements to explore how LAI relates to said metrics (point heights, number of points, etc.).

2. Materials and Methods

2.1. Study Site

The Texas Hill Country American Viticultural Area (THCAVA) was officially recognized in 1991 and is located in central Texas west of Austin and north of San Antonio (Figure 1). This viticultural area contains 22 wineries, encompasses parts of 22 counties, and covers an area of over 36,000 square kilometers (14,000 square miles). This study looked at two contiguous vineyard blocks managed by one winery within the THCAVA. These two blocks of trellis-trained Tempranillo (Vitis vinifera) vines are shown outlined in red in Figure 2 and total approximately 1.9 ha (4.8 acres). Within this outlined area, the eastern block (separated by the dashed red line) was planted five years prior to the western block. Due to this, vines in the western block have significantly younger, smaller canopies than those in the eastern block. Both blocks are included to provide obvious leaf canopy size and density variation throughout the study site to enhance the robustness of the model results. All of the study vines are between five and fifteen years of age. The vines within the block immediately west of the highlighted study vines are not included because they are even less mature and are a different varietal. In total, the study blocks include 39 rows of vines with approximately 70–90 vines per row (around 3,000 vines total). The precise location of the study vineyard within the THCAVA is not disclosed as requested by the property owners.

2.2. Data Collection

Point cloud creation and 3D modeling were completed using the SfM approach discussed in Section 1. This method provides a low-cost alternative for generating 3D data similar to lidar data and, for the sake of practicality, remains a highly replicable method for future studies. Data were collected during the veraison phenological phase of the growing season (nearly 100% or full veraison) following [35]. This phenological phase was chosen for modeling because during earlier phases the canopy may not be fully developed, while later phases may be highly affected by canopy management practices like leaf thinning [2]. Additionally, observations from this part of the growing season have been shown to correlate highly with eventual vine performance in studies using multispectral imagery [4,6].
Over 200 images were taken of the study vineyard with an unaltered, off-the-shelf Canon PowerShot A480 (RGB) digital camera. Images were captured with the use of a remote-controlled Hawkeye II unmanned aerial vehicle (UAV) system (www.ElectricFlights.com, Kingsland, TX, USA). The camera was mounted in the UAV facing downward for nadir capture. This kitewing UAV platform was flown in the vine row direction (north-to-south) for multiple passes to collect the imagery. This flight path was employed because the study UAV flies in a more stable fashion when flown directly into (to the south) or with (to the north) the wind. Flying height ranged from 100 to 200 m above ground. The Canon Hackers Development Kit (CHDK; chdk.wikia.com) intervalometer script was employed to continuously capture images every second during flight. Images not captured within this altitude threshold (at or near takeoff, landing, and during ascent/descent) were not included in image processing and are not reported in Table 1. Images were captured on a cloud-free day (16 June 2012) around 11:00 am to minimize the effect of shadowing between the vine rows. The UAV captured images at nadir and at varying oblique angles. Oblique images were captured both unintentionally, during side-to-side UAV movement caused by crosswinds, and intentionally, by way of banking (leaning) the aircraft. Oblique images were included to better capture and model the canopy in 3D, instead of inputting only nadir images. An increased number of differing angles/perspectives with overlap only serves to further improve the SfM end product [28].
The difficulty of SfM keypoint matching with vegetation has been noted [28] due to the uniform and repeating nature of the surfaces being modeled. Leaves can also be shiny due to their wax-like coating, further deterring proper keypoint matching. To address this and aid efficient SfM product creation, a number of colored targets were placed in the field prior to image capture. Nine 25-cm-wide plastic buckets (pails) were placed upside-down in random locations atop trellis support posts throughout the vineyard. These adornments did not touch the vine canopy growing on the trellis below. The buckets were painted several different flat (non-shiny) colors (orange, yellow, white, and gray) to provide added visual distinctness from the surrounding canopy (green), repeating trellis structure (black), and underlying soil (red/brown). The distinctness of these targets within the vineyard landscape provides initial SfM keypoints from which further keypoints can more easily be generated. This is assumed to create a more accurate SfM model overall as well as potentially reduce processing time. In total, nine buckets were considered enough to aid in initial keypoint matching, although best practices for employing such aids have yet to be tested in SfM studies.
To properly georeference the SfM point cloud, five ground control points (GCPs) were accurately located using a Trimble GeoXH GPS with an external antenna, averaging a total of 200 separate GPS positions for each location (X,Y: NAD 1983, UTM Zone 14N; Z: NAVD88). GPS acquisition was limited to a maximum position dilution of precision (PDOP) of three. Differential correction of the collected GPS positions was completed using the Trimble GPS Analyst extension in ArcGIS and resulted in a mean estimated error of 0.1083 m. GCP targets were crafted out of sturdy foamboard, sized 0.6 m by 0.6 m, and painted red with white and black center targets following [36]. This ensures proper identification within the resulting point cloud model. The GCP targets were designed with additional distinct colors to further aid in SfM keypoint matching, much like the previously discussed colored buckets.
LAI data were collected using an AccuPAR LP-80 ceptometer (Decagon Devices, Pullman, WA, USA) immediately following UAV image capture for improved accuracy with higher sun angles [37]. Similar to [8], offset stratified sampling was implemented consisting of every tenth vine in every fifth row starting with the easternmost row (alternating between the first and fifth vine to begin each sample row starting from the north). A total of 67 vines were sampled for LAI measurements. Ceptometer measurements were taken directly beneath the central portion of the vine canopy beside the vine trunk in a perpendicular fashion to the vine canopy row similar to the accurate measurement protocol M3 reported by [37]. All sample vines were GPS located based on averaging 30 positions rather than 200 as was employed with the GCPs (mean estimated error of 0.1660 m). Following differential correction of the captured GPS positions, collected LAI information for each vine was attributed to their respective locations.
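For clarity, the offset stratified sampling pattern can be expressed in a few lines of Python. The sketch below simply enumerates hypothetical (row, vine) sample positions; the row and vine counts and the 1-based indexing are illustrative assumptions, not the authors' exact field layout.

```python
# Hypothetical layout; the study blocks contain 39 rows of roughly 70-90 vines.
n_rows, vines_per_row = 39, 80

samples = []
for i, row in enumerate(range(0, n_rows, 5)):        # every fifth row, easternmost first
    start = 1 if i % 2 == 0 else 5                   # alternate starting vine (1st or 5th)
    for vine in range(start, vines_per_row + 1, 10): # every tenth vine from the north end
        samples.append((row + 1, vine))              # 1-based (row, vine) indices

print(len(samples), samples[:5])
```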

2.3. Data Processing

The 3D vineyard point cloud was created automatically using Agisoft PhotoScan (Agisoft LLC, St. Petersburg, Russia). It should be noted that manual processing by way of open source software is also possible, but remains more time consuming [28,31,33]. Of the total 206 images captured, 201 were input into the model. Five images were not included because they were either overly blurry or did not capture the study vineyard within the field of view. The latter was due to the UAV having to turn around at the end of the flight line. Such images can potentially introduce noise into the model as well as slow processing time (more images increase processing time, and matching keypoints across blurry images is more difficult). PhotoScan, much like Microsoft PhotoSynth (Microsoft Corporation, Seattle, WA, USA), automatically generated the 3D model based on the input images. This model was then manually georeferenced within PhotoScan by identifying the GCPs within the model, substituting those data points' arbitrary coordinates with the GPS-measured coordinates, and applying this locational transformation to the entire point cloud. The georeferenced point cloud was then exported using the high point density setting to LAS file format.
Point cloud processing was completed using LP360 (QCoherent Software LLC, Madison, AL, USA). Manual removal of noise within the point cloud was first necessary to remove obvious outliers not representing actual ground features (points 0.5 m or more below the ground surface and points more than 10 m above it, based on field observation). The spatial extent of the dataset was also reduced to decrease processing time. Points greater than 70 m away from the study block outline were removed from the dataset. Following this manual effort, the point cloud was processed similar to a lidar dataset with automatic point filtering to separate ground points from non-ground points. LP360 uses an adaptive TIN method to first approximate a terrain surface using the lowest elevations in a large grid and then iteratively refines the surface until an accurate representation of bare earth points is achieved [38]. After automatic classification, additional manual classification was necessary to reassign obviously misclassified points to their proper class for improved ground/non-ground separation.
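As a rough illustration of the noise and extent filtering described above (not the LP360 adaptive TIN filter itself), the sketch below applies the height thresholds and the 70 m clipping buffer to a placeholder point set using NumPy and Shapely; all coordinates and heights are invented for the example.

```python
import numpy as np
from shapely.geometry import Point, Polygon

# Placeholder point cloud (x, y in metres, heights relative to a rough ground estimate).
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 300, 5000), rng.uniform(0, 200, 5000)
height = rng.normal(1.0, 2.0, 5000)

# Hypothetical study-block outline.
block_outline = Polygon([(50, 50), (250, 50), (250, 150), (50, 150)])

# Drop obvious outliers: points 0.5 m below or more than 10 m above the ground surface.
keep_height = (height > -0.5) & (height < 10.0)

# Drop points more than 70 m from the block outline to shrink the dataset.
buffered = block_outline.buffer(70.0)
keep_extent = np.array([buffered.contains(Point(px, py)) for px, py in zip(x, y)])

mask = keep_height & keep_extent
x, y, height = x[mask], y[mask], height[mask]
print(f"kept {mask.sum()} of {mask.size} points")
```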
The classified LAS file was imported into ArcGIS (ESRI, Redlands, CA, USA) as separate vector files of ground and non-ground points. Ground points were used to create a very high spatial resolution (0.25 m) digital terrain model (DTM) using ordinary kriging. To create relative height of non-ground points, the DTM height was subtracted from the absolute height of each non-ground point. This resulted in meaningful heights for each point in the non-ground point cloud representing measurements from the ground surface rather than from sea-level.
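The height normalization step can be sketched as follows, with SciPy's linear interpolation standing in for ordinary kriging and randomly generated points standing in for the classified ground and non-ground returns: the ground points are gridded to a 0.25 m surface, the surface is sampled at each non-ground point location, and the sampled elevation is subtracted from that point's absolute elevation.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
# Placeholder ground and non-ground points (x, y, absolute elevation in metres).
gx, gy = rng.uniform(0, 100, 3000), rng.uniform(0, 100, 3000)
gz = 450 + 0.02 * gx + rng.normal(0, 0.05, 3000)           # gently sloping "terrain"
nx, ny = rng.uniform(0, 100, 800), rng.uniform(0, 100, 800)
nz = 450 + 0.02 * nx + rng.uniform(0.3, 2.0, 800)          # "canopy" points above terrain

# 0.25 m DTM grid from ground points (linear interpolation stands in for kriging here).
cell = 0.25
xg = np.arange(gx.min(), gx.max() + cell, cell)
yg = np.arange(gy.min(), gy.max() + cell, cell)
XX, YY = np.meshgrid(xg, yg)
dtm = griddata((gx, gy), gz, (XX, YY), method="linear")

# Relative (above-ground) height of each non-ground point: sample the ground surface
# at the point location and subtract it from the absolute elevation.
ground_at_points = griddata((gx, gy), gz, (nx, ny), method="linear")
rel_height = nz - ground_at_points
```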

2.4. Data Analysis

Non-ground points were extracted based on proximity to the LAI sampled vines. Points were extracted using rectangles sized 1 m by 2 m, centered on sample vine trunk locations, and oriented north-to-south in line with the vine rows. Even though LAI measurements were only taken at the central base of each vine, these extraction zones are most representative of the full canopy of each vine, more so than a circular buffer around the vine location would be. This is due to the trellis system onto which the vines are trained to grow and the resulting row geometry. Vine-to-vine spacing within each row is 2 m. Therefore, a meter to the north and the south of each vine trunk location represents canopy from that particular vine. Likewise, canopy width is no greater than a meter wide (east-to-west), which provides enough space to include the entire canopy.
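A minimal version of this point extraction, assuming the vine rows are aligned with grid north so the 1 m by 2 m zone reduces to axis-aligned bounds (a rotated-rectangle test would be needed otherwise), might look like the following; the point coordinates and vine location are placeholders.

```python
import numpy as np

def points_for_vine(px, py, h, vine_x, vine_y):
    """Return heights of points inside a 1 m (E-W) by 2 m (N-S) box centred on the vine trunk."""
    inside = (np.abs(px - vine_x) <= 0.5) & (np.abs(py - vine_y) <= 1.0)
    return h[inside]

# Hypothetical non-ground points (easting, northing, relative height) and one vine location.
rng = np.random.default_rng(3)
px, py = rng.uniform(0, 50, 2000), rng.uniform(0, 50, 2000)
h = rng.uniform(0.0, 2.5, 2000)
vine_heights = points_for_vine(px, py, h, vine_x=25.0, vine_y=25.0)
print(vine_heights.size, "points extracted for this vine")
```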
Following extraction of non-ground points for each sampled LAI vine, several metrics were calculated to explore correlations with the LAI measurements following [17,18]. These metrics include the count or number of points within each vine’s zone as well as several height-based metrics such as mean height, variance, standard deviation, coefficient of variation, kurtosis, percentiles (10th, 20th, 30th, 40th, 50th, 60th, 70th, 80th, 90th, 100th), percentile differences (100th–10th, 90th–20th, 100th–50th), and percentile point ratios (i.e., number of points above percentile heights relative to the total number of points within the extraction zone). Furthermore, points with heights below 0.3 m and above 2.3 m were not included in metric calculation. These points were excluded based on field observation, which determined that points at these heights could not represent vine canopy. The relationship between these metrics and the measured LAI was modeled using the All Possible Models (i.e., best subsets regression) function in the JMP statistical package (SAS Institute, Cary, NC, USA).
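The metric calculation for a single vine can be sketched as below; the metric names loosely follow the paper's convention (e.g., Per5 for the 50th percentile height), but the exact formulas used by the authors, particularly for the ratio metrics, are assumptions.

```python
import numpy as np
from scipy.stats import kurtosis

def vine_metrics(heights):
    """Point- and height-based metrics for one vine's extraction zone (heights in metres)."""
    h = heights[(heights >= 0.3) & (heights <= 2.3)]   # drop points that cannot be vine canopy
    if h.size == 0:
        return None
    pct_levels = range(10, 101, 10)
    pct = {p: np.percentile(h, p) for p in pct_levels}
    m = {
        "count": h.size,
        "mean": h.mean(),
        "var": h.var(ddof=1) if h.size > 1 else 0.0,
        "std": h.std(ddof=1) if h.size > 1 else 0.0,
        "kurtosis": kurtosis(h) if h.size > 3 else np.nan,
    }
    m["cv"] = m["std"] / m["mean"] if m["mean"] else np.nan
    m.update({f"per{p // 10}": v for p, v in pct.items()})          # e.g., per5 = 50th percentile
    m["per10_1"] = pct[100] - pct[10]                               # percentile differences
    m["per9_2"] = pct[90] - pct[20]
    m["per10_5"] = pct[100] - pct[50]
    # Ratio metrics: share of points above each percentile height.
    m.update({f"ratio_per{p // 10}": (h > pct[p]).mean() for p in pct_levels})
    return m
```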
A square root transformation (LAISQRT) was applied to the field-measured LAI data to meet the assumption of data normality (minimize skewness and kurtosis). LAISQRT served as the dependent variable (Y) while the height-based metrics served as the independent or explanatory variables (X’s). Sample vines with point counts of less than six were excluded from analysis. This condition was imposed based on prior consultation with an expert statistician to determine the minimum number of points necessary for reliable metric calculation (i.e., variance, coefficient of variation, etc.). As such, the total number of observations included in the regression analysis was limited to 44 of the original 67 vines. Correlation analyses between each metric and LAISQRT were performed but yielded weak results for individual metrics (−0.3372 to 0.3941). However, as is common in lidar-based analyses, 3D-derived LAI estimates typically require several predictor variables to accurately quantify structural characteristics such as LAI. In that regard, weak individual correlations were not viewed as a limitation for further analysis. The All Possible Models procedure was implemented to provide a range of one-to-six covariate term models. Candidate models were selected based on several criteria including R2, adjusted R2, RMSE, individual covariates, and overall model significance (α ≤ 0.05). The candidate models were also subjected to a Predicted Residual Sum of Squares (PRESS) analysis, which was used to determine the prediction error of each candidate model. The candidate model with the smallest difference between model root mean square error (RMSE) and PRESS RMSE was selected as the final model.
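The core mechanics of an all-possible-subsets search with a PRESS check can be illustrated as follows. The sketch enumerates one- to six-term models, computes model RMSE and PRESS RMSE via the leave-one-out identity e_i/(1 − h_ii), and keeps the model with the smallest gap between the two. The predictor matrix and response are random placeholders, and the additional screening criteria used in the study (R2, covariate significance) are omitted here.

```python
import itertools
import numpy as np

def press_rmse(X, y):
    """Fit OLS with an intercept and return (model RMSE, PRESS RMSE)."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    hat = np.einsum("ij,jk,ik->i", Xd, np.linalg.pinv(Xd.T @ Xd), Xd)   # leverages h_ii
    press = resid / (1.0 - hat)                                         # leave-one-out residuals
    dof = len(y) - Xd.shape[1]
    return np.sqrt((resid ** 2).sum() / dof), np.sqrt((press ** 2).mean())

# Hypothetical design: rows are the 44 retained vines, columns are candidate metrics.
rng = np.random.default_rng(4)
names = ["var", "cv", "per5", "per9", "per10_5", "ratio_per6", "mean", "kurtosis"]
X_all = rng.normal(size=(44, len(names)))
lai_sqrt = rng.normal(1.3, 0.3, 44)                   # placeholder sqrt-transformed LAI

best = None
for k in range(1, 7):                                  # one- to six-term models
    for combo in itertools.combinations(range(len(names)), k):
        rmse, prmse = press_rmse(X_all[:, combo], lai_sqrt)
        score = abs(prmse - rmse)                      # prefer the smallest RMSE/PRESS gap
        if best is None or score < best[0]:
            best = (score, combo, rmse, prmse)

print("selected terms:", [names[i] for i in best[1]])
```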

3. Results

3.1. SfM Results and Point Cloud Visualization

Characteristics of the vineyard SfM model are reported in Table 1. Of the 206 total images captured by the digital camera during UAV flight, five were not input into the SfM model. From the 201 input images, PhotoScan exported a point cloud with a total of 462,959 points. After removal of noise within the dataset, a total of 432,184 points remained (93.3% of original), of which 333,835 points were classified as ground (72.1% of original) and the remaining 98,349 points as non-ground (21.2% of original). Typically, points flagged and removed as noise were located much higher or lower than expected and did not represent any actual feature on the ground or within the vine canopy. The extent of the dataset was also reduced to decrease processing time and the points located outside of the clipped extent were also classified as noise, comprising 6.7% of the original output point cloud.
The filtered point cloud provides a clear 3D visualization of the study site, as shown in Figure 3 at an oblique angle looking north. Points classified as ground are shown in gray, while points classified as non-ground are shown in orange. The background, and therefore any area not covered by data points, is displayed in black. The vine canopy with its distinct trellis-trained rows quickly becomes evident, especially at the vineyard edges where no aboveground features are present. Likewise, taller objects like the trees to the north of the study vineyard are properly replicated by the SfM model.
For further depiction of the filtering results, a nadir view of an actual UAV captured image (Figure 4(a)) and the classified point cloud of the same extent (Figure 4(b)) are shown in Figure 4. This is the northwesternmost portion of the study vineyard, the extent of which is denoted with a red outline in Figure 4(b). The repeating linear structure of the vine canopy is again apparent in this case. Other objects on the ground are also well classified, such as the vehicle in the upper-right of Figure 4(a). The points representing the vehicle are correctly classified as non-ground. Likewise, other features like the fence enclosing the vineyard (between the vehicle and the start of the vine rows) and the building northwest of the vehicle are also captured by the point filtering as non-ground. The non-ground points representing vine canopy, though, are patchy and less uniform than those found in other areas of the vineyard (refer back to Figure 3). This is not due to misclassification but rather to a lower abundance of points in this area.
Due to the overlap of the UAV images input into the SfM model, the point density across the study vineyard is not uniform. Figure 5 shows both ground point density (left) and non-ground point density (right). Overall, ground points have a higher density than non-ground points because many more points are classified as ground than non-ground (333,835 vs. 98,349). The spatial pattern of high point density, though, remains similar across both sets of points. Both display high densities of points over the central and western sections of the study vineyard. This is where the most overlap in the input images occurred, resulting in more keypoints within those areas. Additionally, the northern edge of the vineyard shows higher point density because this was the staging area from which the UAV was launched and landed, resulting in more images being taken at the north end of the vineyard compared to the south. Mean point densities within the study vineyard block (the extent shown in Figure 5) were as follows: 9.07 points per square meter for all points (unclassified), 6.33 points per square meter for ground classified points, and 2.74 points per square meter for non-ground classified points.
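Local point density of this kind can be computed with a simple gridded count, for example with NumPy's 2D histogram; the point coordinates and the cell size below are illustrative assumptions rather than the settings used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)
# Placeholder non-ground point coordinates (m).
px, py = rng.uniform(0, 200, 98349), rng.uniform(0, 100, 98349)

cell = 5.0                                             # assumed analysis cell size (m)
xedges = np.arange(px.min(), px.max() + cell, cell)
yedges = np.arange(py.min(), py.max() + cell, cell)
counts, _, _ = np.histogram2d(px, py, bins=[xedges, yedges])
density = counts / cell ** 2                           # points per square metre, per cell

print(f"mean density: {density.mean():.2f} pts/m^2")
```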
Despite spatial variation in point density, the SfM point cloud can be used to create powerful visualizations of the study vineyard at a number of scales (see Figure 6(a–c)) including whole vineyard (Figure 6(a)), partial vineyard or vine row (Figure 6(b)), and partial vine row or per vine (Figure 6(c)). The three-dimensional perspective provided in this case increases the ability to perceive the SfM representation of the vine canopy and the density of the non-ground points. This is especially the case with Figure 6(b) in which the low angle provides a view similar to that of standing at the study site looking down the vine rows. Elements added for visual effect include: colored DTM surface, lines representing trellis wires, red poles with cone bases to represent the sampled vine trunk locations and generalized vine heights, red transparent partitions to highlight the sample vine rows, and extraction rectangles (boxes). The 1 m by 2 m extraction rectangles exist within 0.3 to 2.3 m above the DTM surface and are displayed in a transparent brown hue only in Figure 6(c). All of the points within these shapes were extracted and attached to that particular LAI sampled vine.
Ceptometer-collected LAI data throughout the study vineyard resulted in a large range of values from 0.54 to 5.65. Indeed, large variation in canopy density existed across the study site, though some of this range may also reflect ceptometer measurement uncertainty [37,39,40]. The spatial distribution of these collected values is interpolated and shown in Figure 7. This figure confirms the previously mentioned east-west block differentiation in canopy density based on the age of the vines, where the more established vines to the east have larger, denser canopies while the vine canopies to the west are considerably smaller and less dense. The locations of the stratified sample of 67 vines are also shown along with the vine row structure. Spacing of the sample points appears less uniform in the western block (greater distance between sample points); this is due to the removal of vines in the western block, especially noticeable in the last sampled row furthest west.

3.2. Relationship between SfM Output and LAI

In total, 44 of the 67 sampled vines had point counts of six or greater. These 44 observations (n) were used to evaluate the relationship between the SfM height metrics and field-measured LAI (Table 2). The final model was selected based on the best subset of covariates and explained 57% of the variation in field-based measures with an RMSE of 0.24. The six covariate terms used to predict LAISQRT include the variance (Var), coefficient of variation (CV), the 50th and 90th percentile heights (Per5 and Per9), the difference between the 100th percentile height and the 50th percentile height (Per10-5), and the ratio of the number of points above the 60th percentile to the total number of points within the extraction zone (RatioPer6). A summary of the parameter estimates and overall model performance is provided in Table 2.
The regression results are shown graphically in Figure 8 with observed LAI on the X-axis and predicted LAI on the Y-axis. Estimates were back-transformed to LAI and the observed vs. predicted values are shown around a one-to-one relationship line (gray) and a regression fit trend line (black).
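Using the coefficients reported in Table 2, the fitted model can be applied and back-transformed as in the short sketch below; the metric values passed in are purely illustrative.

```python
def predict_lai(var, cv, per5, per9, per10_5, ratio_per6):
    """Apply the reported six-term model to predict sqrt(LAI), then back-transform by squaring.

    Coefficients follow Table 2; the example input values are illustrative only."""
    lai_sqrt = (4.61 + 4.77 * var - 5.05 * cv - 2.91 * per5
                + 1.85 * per9 - 0.716 * per10_5 - 2.45 * ratio_per6)
    return lai_sqrt ** 2

# Hypothetical metric values for a single vine (heights in metres, ratios unitless).
print(predict_lai(var=0.15, cv=0.35, per5=1.1, per9=1.6, per10_5=0.5, ratio_per6=0.4))
```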

4. Discussion

4.1. General Study Limitations

This study presents preliminary findings using SfM to visualize vine canopy and predict LAI. The scope of this study remains limited to a one-time data acquisition in June 2012 at a single vineyard site in the THCAVA. The demonstrated utility of the presented method, therefore, remains limited to this dataset. Due to the high variation of SfM output (i.e., point density) based on image input, it is highly likely that the captured image data influence the success with which this method is employed. In general, comparative SfM studies exploring the degree to which SfM models vary in recreating the same subject at or near the same time period would be very useful. Specifically, further SfM-based viticultural research would benefit greatly from replicating such analyses over several data acquisitions within the growing season, over several years, and across several vineyard sites. At that point, the robustness of this method can fully be recognized. Continued success and advancement of this method may also lead to exploration of differences between grapevine varietals and management practices (more or less leaf thinning).
Though LAI measurements over the study area were obtained in a manner intended to mitigate potential error sources, error may have been introduced in the LAI data acquisition either through the sampling strategy or the measurement approach. For example, the ceptometer requires specific parameters that influence how LAI is calculated by the device, including illumination conditions and leaf angle distribution [40]. Slight changes in illumination conditions throughout the measurement period can also influence LAI measurements because the device requires information on the ratio of total to direct flux [40]. Lopez-Lozano and Casterad [37] reported that the best results from a SunScan ceptometer were obtained under very specific illumination conditions, namely when the sun was neither directly overhead nor parallel to the vine rows. Further, even though the number of PAR sensors used was limited to account for the relatively narrow width of the vine rows, the physical footprint of the sensor likely varied despite best efforts to position the sensor identically at all measurement locations. Lastly, LAI measurements are likely influenced by the trellis system itself, as the wooden components and wires influence light interception.

4.2. SfM as an Alternative Source of High-Density 3D Data

The low cost and relative ease of creating 3D visualizations by way of SfM will likely lead to expanded use in the coming years. Inputting 201 images into PhotoScan, with relatively little user input, resulted in a dense (unclassified) point cloud with a mean of 9.07 points per square meter for use in visualization and analysis. A relative disadvantage of the SfM method of creating 3D datasets, however, is the random nature of SfM-obtained points within the output point cloud, since points can only be assigned based on conjugate feature recognition, that is, the ability of the matching algorithm to identify similar features in two or more images. Spatial variation in point cloud density is likely to occur when creating SfM-based models even with careful planning of image capture. As Figure 5 illustrates, higher point densities tended to result from increased overlap between images. As such, a potential solution would be to ensure that the entire study area is redundantly imaged. In short, to minimize spatial variation in point density across a study area, UAVs equipped with autopilot and flight planning functionality could be programmed with automated image capture to ensure more even study site coverage. This includes obtaining a great deal of image overlap at the edges of the study site, which can be achieved by buffering the desired coverage area by a generous distance.
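As a back-of-the-envelope aid to such flight planning, the ground footprint of a nadir image and the spacing needed for target forward and side overlap follow directly from flying height, sensor size, and focal length. The sketch below uses assumed camera values (not the study camera's actual specifications) purely for illustration.

```python
def footprint_and_spacing(altitude_m, sensor_w_mm, sensor_h_mm, focal_mm,
                          forward_overlap=0.8, side_overlap=0.7):
    """Ground footprint of a nadir image and the along-/across-track spacing
    needed to reach the requested overlaps. All camera values are assumptions."""
    ground_w = altitude_m * sensor_w_mm / focal_mm    # ground width (m)
    ground_h = altitude_m * sensor_h_mm / focal_mm    # ground height (m)
    shot_spacing = ground_h * (1 - forward_overlap)   # trigger interval along track (m)
    line_spacing = ground_w * (1 - side_overlap)      # distance between flight lines (m)
    return ground_w, ground_h, shot_spacing, line_spacing

# Hypothetical small-sensor compact camera flown at 150 m above ground.
gw, gh, shot, line = footprint_and_spacing(150, 6.2, 4.6, 6.6)
print(f"footprint {gw:.0f} x {gh:.0f} m; trigger every {shot:.0f} m; lines every {line:.0f} m")
```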
SfM-acquired topographic datasets [33] were comparable to airborne lidar data in terms of point densities and horizontal and vertical precision. For vegetation-based studies, such as those presented here, the ability to image the ground surface due to the trellis-row nature of the vineyard provided an opportunity to filter ground from non-ground points and generate SfM-derived terrain and canopy datasets. This may not always be the case, especially in areas of high canopy density, where the ground is not visible to the passive imaging system. However, if lidar data are available for an area, the SfM technique may be used to acquire vegetation canopy information such as height or percent cover as long as the lidar data are used to model the bare ground [28].
As it currently stands, multi-temporal lidar acquisitions are not economically feasible over fine/small-scale extents; however, a simple, low-cost aerial camera system can easily be configured to provide similar information more frequently than commissioning repeat lidar acquisitions. Nonetheless, we explicitly state that, unlike lidar, the accuracy of the projected point coordinates is dependent on the geometric transformation between field-measured GPS positions and clearly identifiable features in the imagery, and that point density is variable and dependent on image overlap and conjugate surface features. Future research in SfM acquisition of 3D datasets would certainly benefit from guidance regarding image acquisition and spatial redundancy as well as optimal placement of GCPs to distribute transformation uncertainties equally throughout the study area, as this may affect location (X,Y) and height (Z) accuracies.

4.3. SfM LAI Estimates Compared to Lidar and Spectral-Based Approaches

The results of this study are similar to accuracies reported for lidar-based LAI estimation at other vineyard sites. For example, Llorens et al. [12] were able to account for 49% of the variability in field-measured LAI using the number of lidar returns from a tractor-mounted terrestrial lidar. Llorens et al. [11] achieved a maximum R2 value of 0.40 for a regression model that used the number of lidar canopy returns acquired from a tractor-mounted terrestrial lidar. An exception among lidar-based approaches is [41], who achieved exceptional R2 values of up to 0.99 using a tree area index metric derived from very high density terrestrial lidar scanner data.
Passive optical imagery acquired by either aerial or satellite-based platforms has been the traditional data source for estimating LAI and has, with the exception of [41], produced better results than lidar for vineyard canopy LAI. For example, Johnson [8] used 4 m multispectral IKONOS imagery to calculate predictor variables based on NDVI and accounted for 91 to 98% of the variability in field-measured LAI over four different measurement dates. In another study, Johnson et al. [6] used NDVI derived from IKONOS imagery and were able to account for 72% of the variability in field-measured LAI. Using 0.25 m multispectral aerial imagery and an NDVI threshold of 0.6, Hall et al. [42] calculated planimetric vine canopy area that accounted for 83% of the variability in LAI over several phenological stages.
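For reference, the NDVI-threshold approach used by [42] amounts to computing NDVI per pixel, masking pixels above 0.6, and summing their area. The sketch below illustrates this with randomly generated red and near-infrared bands; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(6)
# Placeholder 0.25 m red and near-infrared reflectance bands.
red = rng.uniform(0.02, 0.3, size=(400, 400))
nir = rng.uniform(0.1, 0.6, size=(400, 400))

ndvi = (nir - red) / (nir + red)
canopy = ndvi > 0.6                                   # threshold reported by [42]
pixel_area = 0.25 ** 2                                # m^2 per 0.25 m pixel
canopy_area = canopy.sum() * pixel_area
print(f"planimetric canopy area: {canopy_area:.1f} m^2")
```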

4.4. Potential of SfM as a Source of 3D Data for LAI Estimation

Although the results of our SfM-derived LAI model explained only a moderate percentage of the variation in field-measured LAI (R2 = 0.567), these results provide proof-of-concept that SfM data, due to their similarity with lidar data, may be used to predict LAI for vineyards. However, several issues related to SfM point densities warrant discussion. As mentioned previously, low point densities in portions of the vineyard were an issue that quickly became apparent during calculation of the point- and height-based regression covariates. Out of 67 LAI-sampled vines, 23 field-measured LAI observations were excluded from regression modeling because the extraction zones surrounding these vines failed to contain more than five points. A higher density of points may have led to an improved R2 value between LAI and the SfM metrics, although we acknowledge that this relationship is not evident in the current modeled output.
Our regression analysis included a series of point- and height-based metrics. Additional research could address the calculation and implementation of more traditional lidar-derived metrics used to predict LAI such as the laser penetration metric [43], laser penetration index, or the laser interception index [44]. Lastly, higher point density may allow for analysis of points within the X-Y plane. Analysis within this plane at designated height profiles could lead to accurate estimates of vine canopy width and overall size. A metric like this could further bolster the regression model, which is currently based primarily on height-based metrics. Such analysis was attempted in this study but was unsuccessful, presumably due to the lack of points.

5. Conclusions

This study presented several visualizations of vine canopy from the whole vineyard to single vine scale based on a SfM-derived point cloud. This generated model of vine canopy was created by capturing 201 aerial photographs with a digital camera mounted on a kitewing UAV. The SfM point cloud was then classified as ground and non-ground with non-ground points representing vegetation. This method was successful at quickly, practically, and inexpensively recreating the vineyard environment at the study site including the vine canopy. Using extracted points from this point cloud, this study reported moderate success in relating measured LAI of vine canopy to SfM point cloud derived metrics with an R2 value of 0.567.
More work utilizing this rapidly developing SfM methodology is necessary. This is especially the case with vegetation-related studies because of the added level of difficulty associated with modeling vegetation. At this stage, modeling vegetation with SfM remains highly experimental and only moderately successful, as shown by this and other studies [28]. The reasonable success of this method at such an early stage provides hope that this technique can be improved upon. The practical and inexpensive nature of the SfM method of 3D modeling makes it highly attractive to researchers and practitioners within a variety of fields.
Future work using SfM for vegetation should employ colored targets to aid in keypoint matching. Likewise, higher point density is always desirable and can be obtained by acquiring more images, although this will prolong the processing time needed to generate the point cloud. Implementation of this SfM method to predict LAI of other types of vegetation, particularly in forestry, would be worth exploring. SfM point clouds could also be utilized to estimate volumetric variables like biomass. Within the realm of viticulture, using this method at and between each phenological phase (budbreak, flowering, veraison, and harvest) to quickly generate whole-vineyard 3D maps of vine growth, both for visualization and for LAI, would be useful for vineyard managers wanting to assess spatial variation in the size and density of vine canopy. It would also be worth exploring potential variability in the prediction of LAI based on phenological phase, where fuller or less dense canopies may improve the accuracy of LAI prediction.

Acknowledgments

The authors wish to thank Joel Scholz for helping collect the image data used in this study. Additionally, thanks are extended to Andrew Hall (Charles Sturt University) for providing valuable input during the data analysis. The authors are also grateful for access to the study site granted by the property owners and further assistance provided by the vineyard manager, winemaker, and maintenance personnel.

References

1. Proffitt, T.; Bramley, R.G.V.; Lamb, D.W.; Winter, E. Precision Viticulture: A New Era in Vineyard Management and Wine Production; Winetitles: Ashford, SA, Australia, 2006.
2. Creasy, G.L.; Creasy, L.L. Crop Production Science in Horticulture 16: Grapes; CABI: Cambridge, UK, 2009.
3. Hall, A.; Louis, J.; Lamb, D.W. Characterising and mapping vineyard canopy using high-spatial-resolution aerial multispectral images. Comput. Geosci. 2003, 23, 813–822.
4. Hall, A.; Lamb, D.W.; Holzapfel, B.P.; Louis, J.P. Within-season temporal variation in correlations between vineyard canopy and winegrape composition and yield. Prec. Agr. 2011, 12, 103–117.
5. Johnson, L.F.; Herwitz, S.; Dunagan, S.; Lobitz, B.; Sullivan, D.; Slye, R. Collection of Ultra High Spatial Resolution Image Data over California Vineyards with a Small UAV. Proceedings of the 30th International Symposium on Remote Sensing of Environment, Honolulu, HI, USA, 10–14 November 2003.
6. Johnson, L.F.; Roczen, D.E.; Youkhana, S.K.; Nemani, R.R.; Bosch, D.F. Mapping vineyard leaf area with multispectral satellite imagery. Comput. Electron. Agr. 2003, 38, 33–44.
7. Rouse, J.W.; Haas, R.H.; Deering, D.W.; Schell, J.A.; Harlan, J.C. Monitoring Vegetation Systems in the Great Plains with ERTS. Proceedings of the 3rd Earth Resources Technology Satellite (ERTS) Symposium, Washington, DC, USA, 10–14 December 1973.
8. Johnson, L.F. Temporal stability of an NDVI-LAI relationship in a Napa Valley vineyard. Aust. J. Grape Wine Res. 2003, 9, 96–101.
9. Rosell, J.R.; Llorens, J.; Sanz, R.; Arno, J.; Ribes-Dasi, M.; Masip, J.; Escola, A.; Camp, F.; Solanelles, F.; Gracia, F.; et al. Obtaining the three-dimensional structure of tree orchards from remote 2D terrestrial LIDAR scanning. Agr. For. Meteorol. 2009, 149, 1505–1515.
10. Keightley, K.E.; Bawden, G.W. 3D volumetric modeling of grapevine biomass using tripod LiDAR. Comput. Electron. Agr. 2010, 74, 305–312.
11. Llorens, J.; Gil, E.; Llop, J.; Escola, A. Ultrasonic and LIDAR sensors for electronic canopy characterization in vineyards: Advances to improve pesticide application methods. Sensors 2011, 11, 2177–2194.
12. Llorens, J.; Gil, E.; Llop, J.; Queralto, M. Georeferenced LiDAR 3D vine plantation map generation. Sensors 2011, 11, 6237–6256.
13. Sanz-Cortiella, R.; Llorens-Calveras, J.; Escola, A.; Arno-Satorra, J.; Ribes-Dasi, M.; Masip-Vilalta, J.; Camp, F.; Gracia-Aguila, F.; Solanelles-Batlle, F.; Planas-DeMarti, S.; et al. Innovative LIDAR 3D dynamic measurement system to estimate fruit-tree leaf area. Sensors 2011, 11, 5769–5791.
14. Seidel, D.; Beyer, F.; Hertel, D.; Fleck, S.; Leuschner, C. 3D-laser scanning: A non-destructive method for studying above-ground biomass and growth of juvenile trees. Agr. For. Meteorol. 2011, 151, 1305–1311.
15. Mathews, A.J.; Jensen, J.L.R. An airborne LiDAR-based methodology for vineyard parcel detection and delineation. Int. J. Remote Sens. 2012, 33, 5251–5267.
16. Means, J.E.; Acker, S.A.; Fitt, B.J.; Renslow, M.; Emerson, L.; Hendrix, C.J. Predicting forest stand characteristics with airborne scanning lidar. Photogramm. Eng. Remote Sensing 2000, 66, 1367–1371.
17. Popescu, S.C.; Wynne, R.H.; Scrivani, J.A. Fusion of small-footprint lidar and multispectral data to estimate plot-level volume and biomass in deciduous and pine forests in Virginia, USA. For. Sci. 2004, 50, 551–565.
18. Jensen, J.L.R.; Humes, K.S.; Vierling, L.A.; Hudak, A.T. Discrete-return lidar-based prediction of leaf area index in two conifer forests. Remote Sens. Environ. 2008, 112, 3947–3957.
19. Peduzzi, A.; Wynne, R.H.; Thomas, V.A.; Nelson, R.F.; Reis, J.J.; Sanford, M. Combined use of airborne lidar and DBInSAR data to estimate LAI in temperate mixed forests. Remote Sens. 2012, 4, 1758–1780.
20. Louarn, G.; Lecoeur, J.; Lebon, E. A three-dimensional statistical reconstruction model of grapevine (Vitis vinifera) simulating canopy structure variability within and between cultivar/training system pairs. Ann. Bot. 2008, 101, 1167–1184.
21. Omasa, K.; Hosoi, F.; Konishi, A. 3D lidar imaging for detecting and understanding plant responses and canopy structure. J. Exp. Bot. 2007, 58, 881–898.
22. Leberl, F.; Irschara, A.; Pock, T.; Meixner, P.; Gruber, M.; Scholz, S.; Weichert, A. Point clouds: Lidar versus 3D vision. Photogramm. Eng. Remote Sensing 2010, 76, 1123–1134.
23. Turner, D.; Lucieer, A.; Watson, C. Development of an Unmanned Aerial Vehicle (UAV) for Hyper Resolution Mapping Based Visible, Multispectral, and Thermal Imagery. Proceedings of the 34th International Symposium on Remote Sensing of Environment, Sydney, NSW, Australia, 10–15 April 2011.
24. Dey, A.; Mummet, L.; Sukthankar, R. Classification of Plant Structures from Uncalibrated Image Sequences. Proceedings of the IEEE Workshop on Applications of Computer Vision, Breckenridge, CO, USA, 9–11 January 2012.
25. Snavely, N. Scene Reconstruction and Visualization from Internet Photo Collections. Ph.D. Dissertation, University of Washington, Seattle, WA, USA, 2008.
26. Snavely, N.; Seitz, S.M.; Szeliski, R. Modeling the world from internet photo collections. Int. J. Comput. Vis. 2008, 80, 189–210.
27. Kaminsky, R.S.; Snavely, N.; Seitz, S.T.; Szeliski, R. Alignment of 3D Point Clouds to Overhead Images. Proceedings of the Second IEEE Workshop on Internet Vision, Miami, FL, USA, 20–25 June 2009.
28. Dandois, J.P.; Ellis, E.C. Remote sensing of vegetation structure using computer vision. Remote Sens. 2010, 2, 1157–1176.
29. Harwin, S.; Lucieer, A. Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from unmanned aerial vehicle (UAV) imagery. Remote Sens. 2012, 4, 1573–1599.
30. Rosnell, T.; Honkavaara, E. Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera. Sensors 2012, 12, 453–480.
31. Mathews, A.J.; Jensen, J.L.R. Three-Dimensional Building Modeling Using Structure from Motion: Improving Model Results with Telescopic Pole Aerial Photography. Proceedings of the 35th Applied Geography Conference, Minneapolis, MN, USA, 10–12 October 2012; pp. 98–107.
32. Turner, D.; Lucieer, A.; Watson, C. An automated technique for generating georectified mosaics from ultra-high resolution unmanned aerial vehicle (UAV) imagery, based on structure from motion (SfM) point clouds. Remote Sens. 2012, 4, 1392–1410.
33. Fonstad, M.A.; Dietrich, J.T.; Courville, B.C.; Jensen, J.L.; Carbonneau, P.E. Topographic structure from motion: A new development in photogrammetric measurement. Earth Surf. Proc. Landf. 2013.
34. Pollefeys, M.; Gool, L.V.; Vergauwen, M.; Verbiest, F.; Cornelis, K.; Tops, J. Visual modeling with a hand-held camera. Int. J. Comput. Vis. 2004, 59, 207–232.
35. Stamatiadis, S.; Taskos, D.; Tsadila, E.; Christofides, C.; Tsadilas, C.; Schepers, J.S. Comparison of passive and active canopy sensors for the estimation of vine biomass production. Prec. Agr. 2010, 11, 306–315.
36. Aber, J.S.; Marzoff, I.; Ries, J.B. Small-Format Aerial Photography: Principles, Techniques and Geosciences Applications; Elsevier: Oxford, UK, 2010.
37. Lopez-Lozano, R.; Casterad, M.A. Comparison of different protocols for indirect measurement of leaf area index with ceptometers in vertically trained vineyards. Aust. J. Grape Wine Res. 2013, 19, 116–122.
38. Axelsson, P. DEM generation from laser scanner data using adaptive TIN models. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2000, 33, 110–117.
39. Hyer, E.J.; Goetz, S.J. Comparison and sensitivity analysis of instruments and radiometric methods for LAI estimation: Assessments from a boreal forest site. Agr. For. Meteorol. 2004, 122, 157–174.
40. Garrigues, S.; Shabanov, N.V.; Swanson, K.; Morisette, J.T.; Baret, F. Intercomparison and sensitivity analysis of leaf area index retrievals from LAI-2000, AccuPAR, and digital hemispherical photography over croplands. Agr. For. Meteorol. 2008, 148, 1193–1209.
41. Arno, J.; Escola, A.; Valles, J.M.; Llorens, J.; Sanz, R.; Masip, J.; Palacin, J.; Rosell-Polo, J.R. Leaf area index estimation in vineyards using a ground-based LiDAR scanner. Prec. Agr. 2012.
42. Hall, A.; Louis, J.P.; Lamb, D.W. Low-resolution remotely sensed images of winegrape vineyards map spatial variability in planimetric canopy area instead of leaf area index. Aust. J. Grape Wine Res. 2008, 14, 9–17.
43. Zhao, K.; Popescu, S.C. Lidar-based mapping of leaf area index and its use for validating GLOBCARBON satellite LAI product in a temperate forest of the southern USA. Remote Sens. Environ. 2009, 113, 1628–1645.
44. Barilotti, A.; Turco, S.; Alberti, G. LAI Determination in Forestry Ecosystem by Lidar Data Analysis. Proceedings of the Workshop on 3D Remote Sensing in Forestry, Vienna, Austria, 14–15 February 2006.
Figure 1. The Texas Hill Country American Viticultural Area (THCAVA) located in central Texas, west of Austin and northwest of San Antonio. THCAVA wineries are clustered in the eastern portions of the vast viticultural area.

Figure 2. The study vineyard blocks, located in the Texas Hill Country American Viticultural Area, are shown outlined in red. The dashed line separates the study blocks. The western block is significantly younger than the eastern block, leading to desirable variation in vine canopy structure and density across the site.

Figure 3. The filtered point cloud of ground (gray) and non-ground (orange) points. The SfM method provides accurate visualizations of the study site with the vine rows in the foreground as well as a fence and taller trees in the background.

Figure 4. A comparison of an actual UAV-captured image (a) and the filtered point cloud (b) for the same area. In (b), ground points are gray and non-ground points are orange. For both (a) and (b), the extent of the study vineyard is shown with a red outline.

Figure 5. The density of points (local average number of points per square meter) for both the ground and non-ground points is highest in the central and western portions of the vineyard block where the most overlap in UAV images occurred.

Figure 6. Three-dimensional visualization of the study vineyard (a–c) including sample vines (red poles) with highlighted sample rows, non-ground point cloud (green spheres), projected trellis wiring (gray lines), and underlying DTM surface. (a) Whole-vineyard scale showing GCPs as red squares with inner white circles. (b) Partial-vineyard scale showing the clustering of points representing the individual vine row canopies. (c) Per-vine scale highlighting the extraction zones for point inclusion/exclusion (transparent brown) and actual points (green).

Figure 7. Canopy density (measured LAI) across the study vineyard blocks. The eastern portion of the vineyard is notably denser, which relates to more mature vines being located there.

Figure 8. Scatterplot of LAI predicted with SfM height metrics (Y-axis) and field-measured LAI (X-axis). The black line indicates the regression fit while the gray line indicates a one-to-one relationship between observed and predicted LAI.
Table 1. Inputs and outputs of PhotoScan and LP360 point cloud processing. Image percentages are of total captured images; point percentages are of the entire output point cloud.

Total Images | Discarded Images | Input Images | Entire Point Cloud | Noise Removed | Classified Ground | Classified Non-Ground
206 (100.0%) | 5 (2.4%) | 201 (97.6%) | 462,959 (100.0%) | 30,775 (6.7%) | 333,835 (72.1%) | 98,349 (21.2%)
Table 2. Results of stepwise multiple regression predicting LAI based on SfM-derived metrics.

R2: 0.567 | R2 Adj.: 0.495 | RMSE: 0.236 | n: 44 | F Ratio: 7.86 | p < 0.0001

Term      | Estimate | Standard Error | t Ratio | Prob > |t| (α = 0.05)
Intercept |  4.61    | 0.979          |  4.71   | <0.001
Var       |  4.77    | 1.97           |  2.42   | 0.020
CV        | −5.05    | 1.58           | −3.19   | 0.003
Per5      | −2.91    | 0.565          | −5.16   | <0.001
Per9      |  1.85    | 0.422          |  4.38   | <0.001
Per10-5   | −0.716   | 0.289          | −2.48   | 0.018
RatioPer6 | −2.45    | 0.996          | −2.51   | 0.017

Predicted LAISQRT = 4.61 + (4.77 × Var) − (5.05 × CV) − (2.91 × Per5) + (1.85 × Per9) − (0.716 × Per10-5) − (2.45 × RatioPer6)
