Article

Prediction of Useful Eggplant Seedling Transplants Using Multi-View Images

1 College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding 071000, China
2 State Key Laboratory of North China Crop Improvement and Regulation, Hebei Agricultural University, Baoding 071000, China
* Author to whom correspondence should be addressed.
Agronomy 2024, 14(9), 2016; https://doi.org/10.3390/agronomy14092016
Submission received: 27 May 2024 / Revised: 6 August 2024 / Accepted: 7 August 2024 / Published: 4 September 2024
(This article belongs to the Section Precision and Digital Agriculture)

Abstract

Traditional deep learning methods employing two-dimensional (2D) images can only classify seedlings as healthy or unhealthy; consequently, this study proposes a method that further classifies healthy seedlings into primary and secondary seedlings and ultimately differentiates the three seedling classes through a three-dimensional (3D) point cloud for the detection of useful eggplant seedling transplants. Initially, RGB images of three types of substrate-cultivated eggplant seedlings (primary, secondary, and unhealthy) were collected, and healthy and unhealthy seedlings were classified using ResNet50, VGG16, and MobileNetV2. Subsequently, a 3D point cloud was generated for the three seedling types, and a series of filtering processes (fast Euclidean clustering, colour-threshold-based point cloud filtering, and voxel filtering) was employed to remove noise. Parameters (number of leaves, plant height, and stem diameter) extracted from the point cloud were highly correlated with the manually measured values, and box plots show that the primary and secondary seedlings were clearly differentiated by the extracted parameters. The point clouds of the three seedling types were ultimately classified directly using the 3D classification models PointNet++, dynamic graph convolutional neural network (DGCNN), and PointConv, together with a point cloud completion operation for plants with missing leaves. The PointConv model demonstrated the best performance, with an average accuracy, precision, and recall of 95.83%, 95.83%, and 95.88%, respectively, and a model loss of 0.01. This method employs spatial feature information to analyse different seedling categories more effectively than 2D image classification and 3D feature extraction methods, and few studies have applied 3D classification methods to the prediction of useful eggplant seedling transplants. Consequently, this method can identify different eggplant seedling types with high accuracy and enables the quality inspection of seedlings during agricultural production.

1. Introduction

Eggplant (Solanum melongena L.) is a widely cultivated vegetable valued for its distinctive flavour, texture, and nutritional properties. Additionally, eggplant skin is a significant source of bioactive compounds, including polyphenols, flavonoids, and dietary fibre, which possess antioxidant, antimicrobial, and anticancer properties [1,2]. The rapid development of horticultural facilities has led to an increasing demand for high-quality vegetables, placing higher requirements on seedling quality during transplantation. However, most facilities rely on production experience and the naked eye to judge whether seedlings meet transplantation standards, which entails a heavy workload, and the current identification standards are neither accurate nor efficient. In addition, manually collected plant phenotypes provide insufficient information; these methods involve a certain degree of subjectivity and cannot satisfy the requirements of factory-based, standardised production, which greatly affects the breeding process and hinders the long-term development of breeding trials [3,4,5]. Current research on plant phenotypes lags far behind the level of gene sequencing technology, and phenomics research struggles to meet the needs of genomics research. With the maturation of digital image processing technology, non-destructive seedling testing based on imaging technology is becoming increasingly prevalent [6,7]. Fu et al. [4] proposed a method for detecting useful leafy vegetable seedling transplants using machine vision technology. This method derives the seedling leaf area pixel value from the greyscale of the two-dimensional (2D) image, followed by threshold segmentation, erosion, dilation, and regional division. The resulting value was compared with a set standard value to reject unqualified seedlings, with a combined accuracy of 85%. Tong et al. [8] developed a seedling quality identification system based on an improved watershed algorithm to segment and calculate the leaf area from 2D images of overlapping tomato, cucumber, eggplant, and tomato leaves. The system then compared the results with set standard values to determine seedling quality, and the final accuracies of seedling quality identification were 98.6, 96.4, 98.6, and 95.2%, respectively. Jin et al. [5] proposed a selective transplantation method for lettuce seedlings based on the ResNet18 network, in which 2D images of healthy and unhealthy seedlings were screened; in a validation test on 900 seedlings, the screening accuracy was 97.44%, and removing the unhealthy seedlings effectively improved the transplantation survival rate. Tang et al. [9] proposed an accurate method for detecting tomato leaf diseases based on 2D images. This method introduced an attention mechanism into the PLPNet network, which eliminated the influence of the soil background on disease detection, thus improving the accuracy and specificity of the detection process. Although seedling detection through 2D images is a common practice in the agricultural field, 2D images can only extract local feature information of the plant, such as leaf length, width, and area from the top view, and plant height, number of leaves, and other phenotypic information from the front view [10,11,12]; however, it is difficult to extract sufficient information, such as stem thickness and degree of leaf curl, from a single 2D image.
Moreover, most non-destructive testing techniques based on 2D images are limited to classifying plants into two categories: healthy and unhealthy. For example, Jin et al. [5] pointed out that because lettuce seedlings were divided into datasets according to whether they met the criterion of three leaves and one heart, it was difficult to identify the seedling type from 2D images alone, resulting in poor seedling classification. In addition, dividing the dataset involved a degree of subjective judgement, which ultimately led to incorrect seedling identification. They therefore suggested that seedling types could also be identified through a 3D point cloud.
In the modern agricultural industry, there is an urgent need for the development of more advanced non-destructive testing techniques to facilitate a better understanding of crop growth status and useful seedling transplant estimation. Among these, three-dimensional (3D) reconstruction technology, as a representative detection and analysis technology, can comprehensively extract plant characteristics and is widely used in the agricultural field [13]. Li et al. [14] established a 3D point cloud of maize seedlings using multi-view images and compared maize plant height, leaf length, leaf width, and relative leaf area measured in the point cloud with the corresponding manual measurements. The results demonstrated a high degree of correlation, with coefficients of determination (R2) of 0.991, 0.989, 0.926, and 0.963, respectively. Therefore, this method was found to be effective for extracting 3D phenotypic information from maize seedlings. Andújar et al. [15] employed the structure from motion (SfM) method for the 3D modelling of weed plants, using image sequences acquired from three different perspectives. This approach enabled accurate estimation of the actual values of plant height, leaf area, and plant dry biomass. Zhang et al. [16] generated point cloud models of various objects, including a hammer, cardboard, book, multimeter, and potted plant, using multi-view images. They then extracted phenotypic parameter information (length, width, height, etc.) from the models and found that the relative error was no more than 6.67% when compared with manually measured values. Furthermore, the method could reconstruct a panoramic 3D point cloud of objects. Wei et al. [17] proposed a 3D reconstruction method for soybean plants based on SfM, which enabled the extraction of phenotypic information such as plant height, leaf length, and leaf width. Compared with manual measurements, the root mean square error (RMSE) values were 0.9997, 0.2357, and 0.2666 cm, the mean absolute percentage error (MAPE) values were 2.7013%, 1.4706%, and 1.8669%, and the coefficients of determination (R2) were 0.9775, 0.9785, and 0.9487, respectively; the method was thus able to extract the 3D phenotypic information of soybean plants effectively and non-destructively.
Thus, 3D technology can be used to accurately extract phenotypic information from crops. However, extracted phenotypic information can only separate seedlings whose quantitative traits differ, such as primary and secondary seedlings, and it is challenging to differentiate healthy from unhealthy seedlings by quantifying phenotypic information alone. Nevertheless, the classification of 3D point clouds can address this issue, enabling healthy and unhealthy seedlings to be distinguished. Among the various methods for classifying and segmenting 3D data, Hopkinson et al. [18] employed ResNet152 to classify 3D reconstructed coral reefs and successfully identified 10 distinct types. Yang et al. [19] acquired point cloud data of trees in a forest using LiDAR, labelled the locations of tree tips by combining the watershed and K-means clustering algorithms, and then segmented single trees. Liu et al. [20] proposed a method for automatically classifying the leaves and stems of potted plants based on point cloud data. Leaf and stem samples were automatically selected using a 3D convex hull algorithm and 2D projected grid density, respectively, to construct a training set. The point cloud data were then classified into leaf and stem points using a support vector machine algorithm, resulting in an automated classification method with good overall performance in terms of accuracy and runtime. Liu et al. [21] generated 3D point clouds of single trees belonging to eight different species and classified and identified them using four distinct models: PointConv, PointMLP, PointNet++, and DGCNN. Among these, PointConv exhibited the highest classification accuracy (99.63%) for tree species, which is beneficial for investigating forest resources. Zhu et al. [22] successfully identified Leighton Buzzard sand in soil via the 3D reconstruction and classification of particles using PointConv, with an accuracy rate of 100%, which aids the study of particle kinematics and the mechanical response of granular soils during laboratory soil testing. The 3D classifications described above yielded promising results. However, there is a paucity of studies investigating the 3D classification of different types of plant seedlings.
In this study, multi-view images of 30–40-day-old eggplant seedlings cultivated on a substrate were collected using an RGB camera, and the eggplant seedlings were classified into three categories, primary, secondary, and unhealthy, with reference to the Zhejiang provincial local standard technical regulations for eggplant green production DB33/T 2321-2021. First, healthy and unhealthy seedlings were classified using ResNet50, VGG16, and MobileNetV2. Second, the SfM algorithm was employed to establish a 3D point cloud model of the eggplant seedlings, which was then preprocessed using a fast Euclidean clustering algorithm, colour-based threshold filtering, and voxel filtering. The phenotypic traits of the seedlings, including plant height, stem diameter, and number of leaves, were extracted to classify the healthy seedlings into primary and secondary seedlings. Finally, three algorithms, PointNet++, DGCNN, and PointConv, were employed to classify the 3D point clouds of the three seedling types; to enhance classification accuracy and address missing sections of the point cloud, a point cloud completion operation was also employed. The classification performance of the PointConv model was found to be optimal, and in the test set, the model accurately distinguished among the three seedling types. The proposed detection method can thus improve the identification of eggplant seedlings available for transplantation.

2. Materials and Methods

2.1. Experimental Materials

In this study, eggplant seedlings cultivated on a substrate were selected and divided into three categories with reference to DB33/T 2321-2021. Primary seedlings were defined as those aged between 30 and 40 d, with six to seven true leaves, a plant height of 10–15 cm, a stem diameter of approximately 0.5 cm, and a well-developed root system. Secondary seedlings were defined as healthy seedlings that did not meet the standards of primary seedlings. Unhealthy seedlings were defined as seedlings with yellowish or brownish leaves and obvious diseases and pests. There were 80 plants in each category, for a total of 240 plants.

2.2. Experiment Design

The flow chart of the experimental design is shown in Figure 1. Images of the eggplant seedlings were first collected, and the 2D classification algorithms ResNet50, VGG16, and MobileNetV2 were then used to classify healthy and unhealthy seedlings from these images. Next, a 3D point cloud was generated for the three seedling types, and a series of filtering processes was employed to remove noise. After building the point cloud, we extracted the point cloud parameters (number of leaves, plant height, and stem diameter) to classify primary and secondary seedlings and classified the three seedling types (primary, secondary, and unhealthy) using the 3D classification models PointNet++, DGCNN, and PointConv.

2.3. Image Acquisition

The imaging system used for image acquisition is shown in Figure 2. An RGB industrial camera (Model NO. FSFE-3200D-10GE, JAI) was used for image acquisition. The camera comprises two 3.2-megapixel CMOS imagers mounted on prisms, which enable it to maintain a high degree of stability during the shooting process and achieve image alignment regardless of changes in the shooting angle. To create a clear and less noisy point cloud dataset, the camera was inclined at 45° from the horizontal direction. The camera was positioned at a distance of 0.65 m from the object, the resolution of the acquired images was 1920 × 1080 px, and the focal length of the lens was set to 25 mm. The eggplant seedlings were placed in the centre of a rotating table, which rotated the plants through 360° at a speed of 1 r/min. One hundred images were captured of each seedling and transferred to a computer for storage.

2.4. 2D Image Classification Models

The ResNet50 model incorporates a residual structure (identity mapping) that, in contrast to a traditional convolutional neural network, reformulates the original mapping H(x) as F(x) + x. The network is therefore no longer a simple stacked structure, and the residual connections address the problem of vanishing gradients. This enhances the efficacy and efficiency of network training without introducing additional computational parameters.
VGG16 comprises five convolutional segments, each containing two to three convolutional layers. To optimise feature information extraction, this study employed a three-convolutional-layer configuration per segment, with a maximum pooling layer at the end of each segment to reduce the feature-map size. The convolutional layers within each segment use the same number of kernels, and the number of kernels increases from segment to segment towards the fully connected layers while the feature-map size decreases. In VGG16, the use of small convolutional kernels reduces the number of parameters and the computational resources required, which in turn allows the network to perform better in feature extraction.
The underlying principle of MobileNetV2 is to replace conventional 3 × 3 convolutions with operations that require fewer parameters. The network comprises two distinct components: a depthwise separable convolution and a 1 × 1 standard convolution. The depthwise separable convolution employs 3 × 3 convolutional kernels for feature extraction, whereas the 1 × 1 standard convolution adjusts the number of channels, thereby significantly reducing the parameter computation.
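As an illustration of how the three 2D backbones can be prepared for the binary healthy/unhealthy task, the sketch below loads ImageNet-pretrained models from torchvision (weight enums from torchvision 0.13 or later are assumed) and replaces each classification head with a two-class output layer. This is a minimal sketch; the paper does not state whether the backbones were frozen or fully fine-tuned.

```python
import torch.nn as nn
from torchvision import models

def build_classifier(name: str, num_classes: int = 2) -> nn.Module:
    """Load an ImageNet-pretrained backbone and replace its head so it outputs
    two classes (healthy vs. unhealthy seedling)."""
    if name == "resnet50":
        net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        net.fc = nn.Linear(net.fc.in_features, num_classes)
    elif name == "vgg16":
        net = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        net.classifier[6] = nn.Linear(net.classifier[6].in_features, num_classes)
    elif name == "mobilenet_v2":
        net = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
        net.classifier[1] = nn.Linear(net.classifier[1].in_features, num_classes)
    else:
        raise ValueError(f"unknown model: {name}")
    return net
```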

2.5. 3D Reconstruction Based on SfM

Image-based 3D reconstruction is a technique for constructing 3D models from 2D images. SfM is one such method; its principle is to apply a matching algorithm to the acquired multi-view image sequence to obtain correspondences of the same pixel points across images. Subsequently, the 3D model of the object is reconstructed by combining the matching constraints with the principle of triangulation to obtain the 3D coordinates of the spatial points. The reconstruction process encompasses several key steps, including feature point extraction and matching, followed by sparse and dense point cloud reconstruction. Furthermore, the SfM algorithm is capable of self-correction and is less susceptible to external noise.
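To make the geometry concrete, the following is a minimal two-view sketch of the SfM idea using OpenCV: matched SIFT features give pixel correspondences, the essential matrix yields the relative camera pose, and triangulation recovers sparse 3D points. It is illustrative only; the reconstruction in this study was performed with Agisoft Metashape, which additionally runs bundle adjustment over all views and dense reconstruction, and the camera intrinsic matrix K is assumed to be known here.

```python
import cv2
import numpy as np

def two_view_points(img1, img2, K):
    """Sparse two-view reconstruction: detect and match features, estimate the
    relative pose from the essential matrix, and triangulate matched points."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    # Ratio-test matching keeps only distinctive correspondences
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2) if m.distance < 0.75 * n.distance]
    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])

    # Essential matrix with RANSAC, then the relative camera pose (R, t)
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K)

    # Triangulate inlier correspondences into 3D points (homogeneous -> Euclidean)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inl = mask.ravel() > 0
    X = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
    return (X[:3] / X[3]).T        # N x 3 points
```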

2.6. Point Cloud Preprocessing

In this study, the primary software used for SfM-based 3D reconstruction was Agisoft Metashape (version 1.6, Agisoft LLC, St. Petersburg, Russia) together with CloudCompare [23]. For the multi-viewpoint images obtained from the RGB camera, Agisoft Metashape was used for sparse and dense reconstruction of the point cloud. The RGB camera used in this study is an industrial-grade camera that can acquire high-quality images with large file sizes, and Agisoft Metashape can process high-quality image sequences with high accuracy, making it well suited to 3D reconstruction. The reconstructed 3D point cloud was visualised using CloudCompare (version 2.13.2). The overall processing flow is shown in Figure 1.

2.6.1. Fast Euclidean Clustering Algorithm for Image Background Removal

The fast Euclidean clustering algorithm performs clustering by calculating the Euclidean distance between points in the point cloud. First, each data point is numbered, and all data points are divided into several subspaces. The distance between the subspace where the target point is located and the data points in the neighbouring subspaces is calculated, the points of the same class are divided into one class according to the set radius size and the number of points, and the process is looped. Finally, the irrelevant background is removed to achieve clustering. This method only needs to calculate the distance between the subspace and the points in the neighbouring subspace, instead of calculating the distance between all the data points, which can significantly reduce the number of times the distance is calculated, thus improving the clustering efficiency [24].
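A simplified sketch of Euclidean clustering is given below using a k-d tree from SciPy: clusters grow breadth-first from seed points within a fixed radius, and clusters smaller than a minimum size are discarded as background or noise. The radius and minimum-size values are illustrative rather than the ones used in this study, and the sketch omits the subspace partitioning that gives the fast variant its speed advantage.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.01, min_points=50):
    """Group points whose mutual distance is below `radius`; small clusters are
    treated as background or noise and discarded. `points` is an (N, 3) array."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    cluster_id = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        # Breadth-first expansion of the cluster from the seed point
        labels[seed] = cluster_id
        queue = [seed]
        while queue:
            idx = queue.pop()
            for nb in tree.query_ball_point(points[idx], radius):
                if labels[nb] == -1:
                    labels[nb] = cluster_id
                    queue.append(nb)
        cluster_id += 1
    clusters = [np.where(labels == c)[0] for c in range(cluster_id)]
    return [c for c in clusters if len(c) >= min_points]
```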

2.6.2. Point Cloud Filtering Based on Colour Threshold

The purpose of filtering based on the colour threshold is to remove noise points. The RGB colour threshold is set after obtaining the colour information of the point cloud. As there is a more significant difference between the RGB values of the noise and the RGB values of the eggplant leaves, the threshold can be determined based on the difference so that the white noise at the edges of the leaves can be removed [14].
First, all points in the point cloud were traversed to obtain the RGB value of each point, and the values of each point in the R, G, and B channels were defined as r, g, and b. The thresholds were then calculated according to Equations (1) and (2) to obtain the colour threshold distribution of the point cloud. If the threshold distribution range is satisfied and g > r and g > b, the point is retained as part of the plant; otherwise, it is removed as a noise point.
\[
S_{rgb} = r + b + g, \qquad \mathrm{abs}_{rg} = \left| r - g \right|, \qquad \mathrm{abs}_{bg} = \left| b - g \right|, \qquad \mathrm{abs}_{rb} = \left| r - b \right|, \tag{1}
\]
\[
R_{rg} = \frac{\mathrm{abs}_{rg}}{S_{rgb}}, \qquad R_{bg} = \frac{\mathrm{abs}_{bg}}{S_{rgb}}, \qquad R_{rb} = \frac{\mathrm{abs}_{rb}}{S_{rgb}}, \tag{2}
\]
where S_rgb is the sum of r, g, and b; abs_rg is the absolute value of the difference between r and g; abs_bg is the absolute value of the difference between b and g; abs_rb is the absolute value of the difference between r and b; R_rg is the ratio of abs_rg to S_rgb; R_bg is the ratio of abs_bg to S_rgb; and R_rb is the ratio of abs_rb to S_rgb.
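A direct NumPy reading of Equations (1) and (2) is sketched below. The exact threshold distribution range is not reported, so the upper bound `ratio_max` is a placeholder rather than the value used by the authors.

```python
import numpy as np

def colour_threshold_filter(xyz, rgb, ratio_max=0.9):
    """Keep points whose colour is green-dominant (g > r and g > b) and whose
    channel-difference ratios (Equation (2)) fall inside the threshold range.
    `ratio_max` is an illustrative threshold, not the authors' value."""
    r, g, b = rgb[:, 0].astype(float), rgb[:, 1].astype(float), rgb[:, 2].astype(float)
    s_rgb = r + b + g + 1e-9                # Equation (1): channel sum
    r_rg = np.abs(r - g) / s_rgb            # Equation (2): difference ratios
    r_bg = np.abs(b - g) / s_rgb
    r_rb = np.abs(r - b) / s_rgb
    keep = (g > r) & (g > b) & (r_rg < ratio_max) & (r_bg < ratio_max) & (r_rb < ratio_max)
    return xyz[keep], rgb[keep]
```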

2.6.3. Point Cloud Down Sampling Based on Voxel Filtering

Point cloud voxel filtering aims to reduce the number of points in a point cloud without damaging its structure. Small cubic grids (voxel grids) are first created in the point cloud data; all points in each voxel grid are then approximated by their centre of gravity, which is retained while the remaining points are deleted, so that a single point represents all points in the voxel grid. This method reduces the number of points while preserving the morphological characteristics of the cloud and improving computational efficiency.
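Voxel down-sampling of this kind is available directly in common point cloud libraries; the snippet below uses Open3D, with a hypothetical file name and an illustrative voxel size (in model units), simply to show the operation described above.

```python
import open3d as o3d

# Down-sample a reconstructed seedling cloud with a voxel grid; each occupied
# voxel is replaced by the centroid of the points that fall inside it.
pcd = o3d.io.read_point_cloud("seedling.ply")        # hypothetical file name
pcd_down = pcd.voxel_down_sample(voxel_size=0.002)   # voxel edge length (model units)
o3d.io.write_point_cloud("seedling_down.ply", pcd_down)
```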

2.6.4. Point Cloud Completion through a Point Fractal Network (PF-Net)

A PF-Net model was used to supplement the incomplete point cloud by first eliminating its RGB, leaving only XYZ information. Next, the iterative farthest point sampling multi-scale downsampling algorithm was used to extract the point cloud features and the features from different depths were merged. Finally, a multi-scale point cloud was generated by the point cloud pyramid decoder to complete the point cloud complementation task. Additionally, during the training process, the chamfer distance was used as a loss function to train the model [25].
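The chamfer distance used as the training loss can be written compactly; the following is a minimal PyTorch sketch of the standard symmetric form. PF-Net in practice combines several such terms computed at multiple scales, so this is only the basic building block.

```python
import torch

def chamfer_distance(p1: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets of shape (B, N, 3) and
    (B, M, 3): mean nearest-neighbour squared distance in both directions."""
    d = torch.cdist(p1, p2) ** 2                # (B, N, M) pairwise squared distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()
```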

2.7. Phenotypic Feature Extraction Based on 3D Point Cloud

2.7.1. Phenotype Calculation Method

Plant height measured in this study was the distance from the point of contact between the plant and the soil to the top of the eggplant seedling. First, the growth direction of the eggplant seedling was aligned with the positive direction of the z-axis using a translation and rotation matrix. Subsequently, all points in the cloud were traversed to find the maximum and minimum values on the z-axis, and the plant height was obtained from their difference. Simultaneously, the maximum and minimum values of the point cloud on the x- and y-axes were calculated, and the stem diameter was obtained by averaging the two extents. The calculation formula is given in Equation (3).
\[
h = z_{max} - z_{min}, \qquad c_1 = x_{max} - x_{min}, \qquad c_2 = y_{max} - y_{min}, \qquad c = \frac{c_1 + c_2}{2}, \tag{3}
\]
where h indicates the height of the eggplant seedling; c indicates the stem diameter of the eggplant seedling; x_max, y_max, and z_max indicate the maximum values of the point cloud on the x-, y-, and z-axes, respectively; and x_min, y_min, and z_min indicate the minimum values of the eggplant point cloud on the x-, y-, and z-axes, respectively.
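A direct NumPy reading of Equation (3) is sketched below; it assumes the cloud has already been rotated so that growth is aligned with +z, and in practice the x- and y-extents used for the diameter would be taken over the stem points rather than the whole plant.

```python
import numpy as np

def height_and_diameter(points):
    """Equation (3): plant height from the z-extent, stem diameter as the mean
    of the x- and y-extents. `points` is an (N, 3) array already rotated so
    that the growth direction is aligned with the positive z-axis."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    h = maxs[2] - mins[2]                                    # plant height
    c = ((maxs[0] - mins[0]) + (maxs[1] - mins[1])) / 2.0    # stem diameter
    return h, c
```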

2.7.2. Point Cloud Segmentation

In this study, a region growing segmentation algorithm was used to segment the point clouds of the eggplant seedlings. This method can identify and segment plant organs such as seedling leaves and stems. The principle of the region growing algorithm is to collect points with similar properties to form a region. First, for each region, a seed point is identified as the starting point of growth. Next, points near the seed with the same or similar properties are merged into the region containing the seed. The newly added points then act as seeds, and the region continues to grow in all directions until no more points satisfy the inclusion conditions. The output of this algorithm is an array of clusters, each of which is considered a collection of points belonging to the same smooth surface; the number of leaf clusters therefore gives the number of leaves.
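A hedged sketch of this idea is given below, using Open3D for normal estimation and neighbour search: regions grow from seeds by absorbing neighbours whose surface normals are nearly parallel, and sufficiently large smooth clusters are counted. The angle, radius, and minimum-cluster-size thresholds are illustrative, and in practice the stem cluster would still need to be excluded before taking the count as the number of leaves.

```python
import numpy as np
import open3d as o3d

def count_smooth_clusters(pcd, angle_deg=12.0, radius=0.004, min_cluster=60):
    """Greedy region growing: grow each region from a seed by absorbing
    neighbours whose normals are nearly parallel, then count the sufficiently
    large smooth clusters. All thresholds are illustrative."""
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * radius, max_nn=30))
    points = np.asarray(pcd.points)
    normals = np.asarray(pcd.normals)
    tree = o3d.geometry.KDTreeFlann(pcd)
    cos_thr = np.cos(np.deg2rad(angle_deg))

    labels = np.full(len(points), -1, dtype=int)
    n_regions = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = n_regions
        queue = [seed]
        while queue:
            i = queue.pop()
            _, idx, _ = tree.search_radius_vector_3d(points[i], radius)
            for j in idx:
                if labels[j] == -1 and abs(np.dot(normals[i], normals[j])) > cos_thr:
                    labels[j] = n_regions
                    queue.append(j)
        n_regions += 1

    sizes = np.bincount(labels, minlength=n_regions)
    # Leaf-sized smooth clusters; the stem cluster is excluded separately in practice.
    return int((sizes >= min_cluster).sum())
```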

2.7.3. Point Cloud Coordinate Scale Transformation

To obtain the dimensional relationship between the plant point cloud in virtual 3D space and the actual plant, it is necessary to find a corresponding reference to calculate the scaling ratio, as shown in Equation (4). To obtain the scaling between the eggplant 3D point cloud model and the actual eggplant plant, we used a chequerboard grid as a reference to calculate the transformation scale.
\[
k = \frac{L_{real}}{L_{virtual}}, \tag{4}
\]
where L_real is the actual length of the chequerboard grid, L_virtual is the length of the reconstructed chequerboard grid model, and k is the conversion ratio. A fixed-size chequerboard grid (25 × 25 mm per square) was used during the image acquisition process, and the actual size of the reconstructed plant in the real world was obtained by applying the conversion ratio.
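As a small worked example of Equation (4), with illustrative numbers rather than measured values:

```python
# Equation (4): model-to-world scale from a 25 mm chequerboard square.
l_real = 25.0              # actual square length (mm)
l_virtual = 0.018          # square length measured in the point cloud (illustrative)
k = l_real / l_virtual     # conversion ratio
height_mm = k * 0.11       # e.g. a model-space plant height of 0.11 units in mm
```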

2.8. 3D Point Cloud Classification Models

2.8.1. PointNet++ Model

The PointNet++ model first selects sample points using the farthest point sampling method and then groups the K nearest neighbours around each sample point within a set radius. Feature information is extracted within each group by a shared PointNet, and this process is repeated with different centres and radii to obtain multiple groups of local features. Finally, the groups of local features are fused into overall features through fully connected layers, and classification is performed on the overall features. This method retains the local feature information of the sample points while also extracting the overall feature information [26,27].
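The two geometric steps named above, farthest point sampling and radius-based grouping, can be sketched in a few lines of NumPy. This is an illustrative reimplementation rather than the authors' code, and the radius and group size are placeholders.

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Pick `n_samples` indices so that each new sample is as far as possible
    from those already chosen (the sampling step of a set-abstraction layer)."""
    n = len(points)
    chosen = np.zeros(n_samples, dtype=int)
    dist = np.full(n, np.inf)
    chosen[0] = np.random.randint(n)
    for i in range(1, n_samples):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[i - 1]], axis=1))
        chosen[i] = int(dist.argmax())
    return chosen

def ball_query(points, centres, radius=0.02, k=32):
    """Group up to `k` neighbours within `radius` of each sampled centre; each
    group is then fed to a shared PointNet to extract local features."""
    groups = []
    for c in centres:
        idx = np.where(np.linalg.norm(points - c, axis=1) < radius)[0]
        groups.append(idx[:k])
    return groups
```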

2.8.2. DGCNN Model

The DGCNN is a point-based neural network structure that combines a 3D point cloud with a convolutional neural network [28]. It primarily consists of a spatial transformation module, which spatially transforms the point cloud to improve the spatial invariance of the input, and EdgeConv modules, which act as multi-layer perceptrons (MLPs) to learn the local geometric features of the point cloud at each layer. First, the spatially transformed point cloud is obtained by the spatial transformation module using a convolutional operation and matrix multiplication, keeping the arrangement of points in the spatial region unchanged. The points in space are then processed layer by layer, and the feature information of each layer is captured by four successive EdgeConv modules. These modules capture the local geometric features of points that were missing in previous point-based deep learning frameworks [29]. Instead of convolving over fixed neighbourhoods, EdgeConv dynamically updates the neighbourhood of each point at every layer and captures the feature information of the local region with an MLP, increasing the spatial coverage of the neighbourhood before convolution. The points are then transformed through fully connected layers, and a maximum pooling layer fuses the local feature information into overall features, which are used for classification [30].
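A compact PyTorch sketch of a single EdgeConv layer as described above is given below; the layer width and neighbourhood size k are illustrative, and the full DGCNN stacks four such layers behind a spatial transform and classification head.

```python
import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    """One EdgeConv layer: build a k-NN graph (recomputed in feature space at
    each layer), apply a shared MLP to the edge features [x_i, x_j - x_i],
    and max-pool over the neighbours."""
    def __init__(self, in_dim, out_dim, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x):                       # x: (B, N, C)
        idx = torch.cdist(x, x).topk(self.k, largest=False).indices        # (B, N, k)
        nbrs = torch.gather(
            x.unsqueeze(1).expand(-1, x.size(1), -1, -1), 2,
            idx.unsqueeze(-1).expand(-1, -1, -1, x.size(-1)))              # (B, N, k, C)
        edge = torch.cat([x.unsqueeze(2).expand_as(nbrs), nbrs - x.unsqueeze(2)], dim=-1)
        return self.mlp(edge).max(dim=2).values                            # (B, N, out_dim)
```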

2.8.3. PointConv Model

PointConv extends dynamic filters to three dimensions for the convolutional operation. First, centroids are determined for the point cloud by the farthest point sampling method; then, the K nearest neighbours around each centroid are found via the Euclidean distance, forming a local region G. For each local region, the weight function W(δx, δy, δz), the inverse density scale S(δx, δy, δz), and the point features F(x + δx, y + δy, z + δz) are used to compute the interactions between local features and the model loss; finally, the local features are fused and the overall features are extracted using fully connected layers [21]. PointConv is expressed as follows:
\[
\mathrm{PointConv}(S, W, F)_{xyz} = \sum_{(\delta_x, \delta_y, \delta_z) \in G} S(\delta_x, \delta_y, \delta_z)\, W(\delta_x, \delta_y, \delta_z)\, F(x + \delta_x, y + \delta_y, z + \delta_z), \tag{5}
\]
where (δx, δy, δz) represents any position in the local region G, S(δx, δy, δz) represents the inverse density at that position, W(δx, δy, δz) is computed from the 3D offsets (δx, δy, δz) via a multi-layer perceptron, and F(x + δx, y + δy, z + δz) represents the features of the points in the local region G centred at point (x, y, z). The local region can be located at any position in the point cloud [22,31].
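The sum in Equation (5) for a single region can be written directly; the PyTorch sketch below is a literal, illustrative reading in which W is a small MLP of the offsets and the inverse density S is assumed to be precomputed. The released PointConv uses an equivalent but more memory-efficient formulation, so this is only meant to clarify the equation.

```python
import torch
import torch.nn as nn

class PointConvRegion(nn.Module):
    """Literal reading of Equation (5) for one local region G: W is an MLP of
    the relative coordinates, S is a precomputed inverse-density scale, and the
    weighted neighbour features are summed over the region."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.w = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, in_dim * out_dim))

    def forward(self, delta, feats, inv_density):
        # delta: (K, 3) offsets from the centroid; feats: (K, C); inv_density: (K,)
        w = self.w(delta).view(-1, self.in_dim, self.out_dim)   # W(δx, δy, δz)
        scaled = feats * inv_density.unsqueeze(-1)               # S(δ) · F(x+δ, y+δ, z+δ)
        return torch.einsum('kc,kco->o', scaled, w)              # sum over region G
```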

3. Results

3.1. Point Cloud Preprocessing Results

The results of the point cloud preprocessing are shown in Figure 3. A fast Euclidean clustering algorithm was used to remove irrelevant background and noise points, as shown in Figure 3(B(1)–B(3)). This filtering method removes the irrelevant background and extracts the complete plant from the space. A filtering method based on the colour threshold was employed to remove noise points on the edges of the leaves, as illustrated in Figure 3(C(1)–C(3)). Following colour threshold filtering, the noise points at the edges of the plants were significantly reduced in Figure 3(C(1)–C(3)) compared to those in Figure 3(B(1)–B(3)). Furthermore, the filtered point cloud was downsampled for computational convenience, which resulted in a notable reduction in the number of points. This is illustrated in Figure 3(D(1)–D(3)). After voxel filtering, the number of point clouds was reduced, whereas the overall morphology of the plant was preserved.
After preprocessing, the incomplete point clouds of some leaves were subjected to point cloud completion using the PF-Net model. The overall process is illustrated in Figure 4. Initially, the leaves were segmented from the entire eggplant seedling (Figure 4A,B). Subsequently, the RGB data for the leaves were removed (Figure 4C). Finally, the missing parts of the leaves were predicted by PF-Net (Figure 4D), combined with the predicted sections to form complete leaves (Figure 4E), and spliced back into the plant (Figure 4F); the results of the point cloud completion were visualised using CloudCompare. Figure 4D shows the point cloud generated by the PF-Net completion operation: the purple portion of the leaf represents the missing element generated by the PF-Net prediction, which was integrated with Figure 4B to form a complete leaf. Figure 4E illustrates the successful completion of the plant leaves, including the inner and edge points, demonstrating the efficacy of the operation in fitting eggplant leaves. The visualisation results indicate that this method can compensate for missing points to a certain extent, resulting in a uniformly distributed point cloud.

3.2. Phenotypic Feature Extraction Results

The results of the point cloud extraction for each phenotypic parameter were compared with the manual measurement results, as illustrated in Figure 5. Phenotypic parameters, including plant height, stem diameter, and number of leaves, were calculated based on the established point cloud model. As several plants exhibited identical values for the number of leaves, the same values were offset in the x- and y-axis directions for enhanced visualisation.
The parameters extracted from the point cloud demonstrated a notable linear correlation with the manually measured values, as evidenced by the R2 and RMSE values presented in Figure 5. Box plots were generated for each extracted parameter using the same approach as that applied to the number of leaves, as illustrated in Figure 6. Although individual outliers were observed in Figure 6A–C, the overall data distribution was relatively uniform, indicating a clear distinction between the two seedling types in accordance with the manual classification results.

3.3. 2D Image and 3D Point Cloud Classification Results

The image acquisition process included side views that did not show the complete colour of the leaves. Consequently, only plants with complete leaf colour were selected for 2D image classification. To keep the sample sizes of healthy and unhealthy seedlings comparable, the final dataset comprised 4800 images of healthy and unhealthy seedlings, which were divided into training and test sets at a ratio of 8:2. To obtain a batch size suitable for training the models to converge to the global optimum, the batch size was set to 16 and 32. Two different optimisation algorithms, Adam and SGD, were employed, the number of iterations was fixed at 200, and the learning rate was set to 0.001. Furthermore, if there was no improvement in model performance after three epochs, the learning rate was reduced to continue the training. After training was completed, the best training parameters for each model were selected for testing. The average accuracy, precision, and recall for each model are presented in Table 1. The average accuracy, precision, and recall of VGG16 were higher than those of ResNet50 and MobileNetV2, and the number of correct predictions for the two seedling types was also considerably higher.
The classification of 2D images and 3D extracted features was limited to categorising seedlings into two distinct groups. Consequently, the established 3D point clouds were classified by dividing the 80 plants of each seedling category into training and testing sets at a ratio of 7:3. To obtain a batch size suitable for training the models to converge to the global optimum, the batch size was set to 16 and 32. Two different optimisation algorithms, Adam and SGD, were employed, and the number of iterations was set to 200 with a learning rate of 0.001. If model performance did not improve after three epochs, the learning rate was reduced to continue the training. Following the training phase, 24 seedlings of each type were selected for testing, and the most effective training parameters for each model were used. The three models exhibited considerable variations in accuracy during the initial stage (Figure 7A). PointConv demonstrated higher accuracy at the outset and gradually stabilised after 40 epochs. PointNet++ and the DGCNN exhibited lower accuracy at the outset, with PointNet++ reaching an accuracy of nearly 90% after 50 epochs and gradually stabilising. The accuracy of the DGCNN remained below 80% for the first 40 epochs, reached 80% at 40 epochs, subsequently increased sharply, and eventually stabilised after 80 epochs.
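The optimiser and learning-rate schedule described above can be expressed as the following PyTorch sketch; the SGD momentum value and the reduction factor are assumptions, since they are not reported in the paper.

```python
import torch

def make_optimizer(model, name="adam", lr=1e-3):
    """Optimiser/scheduler settings mirroring the reported setup: Adam or SGD,
    learning rate 0.001, and a reduction when the metric stalls for 3 epochs."""
    if name == "adam":
        opt = torch.optim.Adam(model.parameters(), lr=lr)
    else:
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)  # momentum assumed
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, mode="max", factor=0.1, patience=3)
    return opt, sched

# Typical use inside a 200-epoch loop (batch size 16 or 32):
#   for epoch in range(200):
#       train_one_epoch(...); acc = evaluate(...)
#       sched.step(acc)   # reduce lr if accuracy has not improved for 3 epochs
```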
As illustrated in Figure 7B, the loss of each model decreased and stabilised as the number of epochs increased. The PointConv network model converged faster than the other models and exhibited the smallest loss. The PointNet++ model demonstrated an intermediate convergence speed, stabilising after 50 epochs with a loss not significantly different from that of PointConv. The DGCNN model exhibited the slowest convergence, with a sharp decline at the outset followed by stabilisation after 100 epochs and a final loss greater than that of both PointNet++ and PointConv.
Following the completion of model testing, the average accuracy, precision, and recall of each model are presented in Table 1. The average accuracy, precision, and recall of the PointConv network model are superior to those of PointNet++ and the DGCNN. In addition, confidence intervals and significance tests, presented in Table 2 and Table 3, were added to strengthen the validity of the reported performance metrics. The PointConv model outperforms the other five models on all performance metrics, and the differences reach the level of statistical significance (p < 0.05). The confidence intervals indicate that the PointConv model has high performance stability, with a small range of fluctuation in most of the metrics. The confusion matrices for each model on the test set are shown in Figure 8. In the actual prediction process, the PointConv model incorrectly predicted only three seedlings, all confusions between the primary and secondary seedling classes, and its prediction accuracy was significantly higher than that of the other models.

4. Discussion

In this study, we employed three classification models that are widely used and achieve high accuracy on public datasets, ResNet50, VGG16, and MobileNetV2, to distinguish between healthy and unhealthy seedlings. The overall accuracy of the three models on the test set was relatively low, and there was misclassification between the two seedling types. This may be because we classified the entire eggplant seedling directly: apart from leaves with abnormal colouration, unhealthy seedlings have essentially the same colour characteristics as healthy seedlings, so the two classes are difficult to separate on colour features alone. When the dataset was divided, only seedlings exhibiting one cotyledon, or 80% or more of one cotyledon, with incorrect colouration were classified as unhealthy. If two seedlings exhibit only minor differences in their features, they may be assigned to the same category by the model, which can result in low final classification accuracy. This is consistent with the findings of Jin et al. [5], who indicated that seedling classification was ambiguous because the leaf area of lettuce seedlings did not meet the criterion of three leaves and one heart. Additionally, there was a subjective influence in dividing the dataset, which ultimately led to seedling misidentification.
The results of the parameters extracted from the 3D point cloud showed that the differences between the extracted parameters and the manually measured values were relatively small. In the box plots of the extracted parameters, the parameter values of the two seedling types were clearly differentiated, which is also consistent with the classification based on the manually measured values. However, the box plots of the number of leaves, plant height, and stem diameter each exhibited some outliers. Comparison with the manual measurements showed that these were genuine observations rather than errors in the data analysis; they may be due to uneven seedling growth, with several seedlings taller than average and showing greater values for leaf number, plant height, and stem diameter, whereas others were shorter than average, with lower values for each phenotypic parameter.
The 3D point cloud classification of the three seedling types revealed that the classification accuracy of the PointNet++, DGCNN, and PointConv models was higher than that of ResNet50, VGG16, and MobileNetV2. This effectively addresses the lower classification accuracy caused by the small differences in colour features between healthy and unhealthy seedlings in 2D image classification, and it can be attributed to the fact that a 3D point cloud encompasses a much larger array of features. Although there is minimal distinction in colour features between healthy and unhealthy seedlings, there are pronounced differences in plant height and stem diameter; the 3D point cloud captures these distinctions, enabling differentiation between the two seedling types. Medeiros et al. [32] identified a similar issue in the non-destructive testing of soybean seeds using 2D images, where seeds of different categories were misclassified because the captured 2D images could not fully cover a single seed, and they likewise considered applying 3D images to further enhance classification. During the point cloud classification process, all three models demonstrated greater accuracy in identifying unhealthy seedlings. However, some degree of misclassification was observed, with primary and secondary seedlings being confused. This may be attributed to the fact that the plant height and stem diameter thresholds used when dividing the dataset were 10 cm and 0.5 cm, respectively, and some secondary seedlings were close to these boundaries, which may have resulted in their classification as primary seedlings. This ultimately led to classification errors and a reduction in classification accuracy. The PointConv model exhibited the highest accuracy and lowest model loss among the three models. This may be attributed to the capacity of the PointConv model to calculate the model loss and fuse local information more comprehensively, which is achieved by computing the weight function W, the inverse density scale S, and the point features F while calculating the model loss, with subsequent dynamic adjustment of the convolutional operation.
Compared with 2D images, 3D point clouds provide more comprehensive depth information on crop features. However, this comes at the cost of a longer acquisition time, largely because of the sheer number of points and the necessary hardware. The data processing time is also longer; on our computer, building one point cloud took about 30 min with the following configuration: a 64-bit Windows operating system, an Intel(R) Core(TM) i5-8300H processor, 16 GB of RAM, and a GTX 1050 graphics card. Additionally, phenotypic parameter extraction is subject to certain limitations, largely owing to imperfect acquisition equipment and the influence of the surrounding environment, especially in agricultural settings. In particular, the errors are primarily attributable to slight vibrations generated by the rotating platform, the blowing of seedlings by external winds, and reflections from the rotating platform under natural light while the camera was capturing images. Furthermore, some irrelevant background was recorded during image acquisition owing to the absence of a background plate, which resulted in the inclusion of noise points and irrelevant background in the point cloud.

5. Conclusions

This study introduces a novel approach to plant seedling classification in which a 3D classification model is applied to determine useful eggplant seedling transplants by constructing 3D point clouds from multi-view images. The point clouds of the three seedling types were classified using PointNet++, DGCNN, and PointConv. The PointConv model exhibited the best classification performance on the test set, with an average accuracy, precision, recall, and F1-score of 95.83%, 95.83%, 95.88%, and 95.86%, respectively. Compared with 2D image classification and 3D point cloud feature extraction, this method classifies seedlings into three categories with greater accuracy, which is conducive to the non-destructive detection of eggplant seedlings throughout agricultural production and marketing. Furthermore, it will enable agricultural managers and operators to formulate effective planting plans and marketing strategies. Future work will focus on enhancing the classification accuracy of the model and streamlining image acquisition.

Author Contributions

Conceptualization, X.Y. and J.L.; methodology, X.Y.; software, X.Y. and R.T.; validation, X.Y., H.W., and Y.Z.; formal analysis, X.Y. and H.W.; investigation, X.Y. and J.L.; resources, X.Y.; data curation, X.Y., J.L., and R.T.; writing—original draft preparation, X.Y.; writing—review and editing, X.F.; visualization, X.Y. and Y.Z.; supervision, X.F.; project administration, X.F.; funding acquisition, J.L. and X.F. All authors have read and agreed to the published version of the manuscript.

Funding

This study is supported by the National Natural Science Foundation of China (32072572), the earmarked fund for CARS (CARS-23), and the Innovative Research Group Project of Hebei Natural Science Foundation (C2020204111).

Data Availability Statement

The datasets generated during the current study are not publicly available due to privacy restrictions but are available from the corresponding author on reasonable request.

Acknowledgments

We would like to thank Youwei Zhang and Yubo Feng for their extensive participation in the experiment.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Amulya, P.; Ul Islam, R. Optimization of Enzyme-Assisted Extraction of Anthocyanins from Eggplant (Solanum melongena L.) Peel. Food Chem. X 2023, 18, 100643. [Google Scholar] [CrossRef]
  2. Taşan, S.; Cemek, B.; Taşan, M.; Cantürk, A. Estimation of Eggplant Yield with Machine Learning Methods Using Spectral Vegetation Indices. Comput. Electron. Agric. 2022, 202, 107367. [Google Scholar] [CrossRef]
  3. Jin, X.; Li, R.; Tang, Q.; Wu, J.; Jiang, L.; Wu, C. Low-Damage Transplanting Method for Leafy Vegetable Seedlings Based on Machine Vision. Biosyst. Eng. 2022, 220, 159–171. [Google Scholar] [CrossRef]
  4. Fu, W.; Gao, J.; Zhao, C.; Jiang, K.; Zheng, W.; Tian, Y. Detection Method and Experimental Research of Leafy Vegetable Seedlings Transplanting Based on a Machine Vision. Agronomy 2022, 12, 2899. [Google Scholar] [CrossRef]
  5. Jin, X.; Tang, L.; Li, R.; Ji, J.; Liu, J. Selective Transplantation Method of Leafy Vegetable Seedlings Based on ResNet 18 Network. Front. Plant Sci. 2022, 13, 893357. [Google Scholar] [CrossRef] [PubMed]
  6. Li, L.; Bie, Z.; Zhang, Y.; Huang, Y.; Peng, C.; Han, B.; Xu, S. Nondestructive Detection of Key Phenotypes for the Canopy of the Watermelon Plug Seedlings Based on Deep Learning. Hortic. Plant J. 2023, S2468014123001267. [Google Scholar] [CrossRef]
  7. Zhao, S.; Lei, X.; Liu, J.; Jin, Y.; Bai, Z.; Yi, Z.; Liu, J. Transient Multi-Indicator Detection for Seedling Sorting in High-Speed Transplanting Based on a Lightweight Model. Comput. Electron. Agric. 2023, 211, 107996. [Google Scholar] [CrossRef]
  8. Tong, J.H.; Li, J.B.; Jiang, H.Y. Machine Vision Techniques for the Evaluation of Seedling Quality Based on Leaf Area. Biosyst. Eng. 2013, 115, 369–379. [Google Scholar] [CrossRef]
  9. Tang, Z.; He, X.; Zhou, G.; Chen, A.; Wang, Y.; Li, L.; Hu, Y. A Precise Image-Based Tomato Leaf Disease Detection Approach Using PLPNet. Plant Phenomics 2023, 5, 0042. [Google Scholar] [CrossRef] [PubMed]
  10. Zermas, D.; Morellas, V.; Mulla, D.; Papanikolopoulos, N. 3D Model Processing for High Throughput Phenotype Extraction—The Case of Corn. Comput. Electron. Agric. 2020, 172, 105047. [Google Scholar] [CrossRef]
  11. Yang, W.; Feng, H.; Zhang, X.; Zhang, J.; Doonan, J.H.; Batchelor, W.D.; Xiong, L.; Yan, J. Crop Phenomics and High-Throughput Phenotyping: Past Decades, Current Challenges, and Future Perspectives. Mol. Plant 2020, 13, 187–214. [Google Scholar] [CrossRef] [PubMed]
  12. Zhao, C.; Zhang, Y.; Du, J.; Guo, X.; Wen, W.; Gu, S.; Wang, J.; Fan, J. Crop Phenomics: Current Status and Perspectives. Front. Plant Sci. 2019, 10, 714. [Google Scholar] [CrossRef] [PubMed]
  13. Perez-Sanz, F.; Navarro, P.J.; Egea-Cortines, M. Plant Phenomics: An Overview of Image Acquisition Technologies and Image Data Analysis Algorithms. GigaScience 2017, 6, gix092. [Google Scholar] [CrossRef] [PubMed]
  14. Li, Y.; Liu, J.; Zhang, B.; Wang, Y.; Yao, J.; Zhang, X.; Fan, B.; Li, X.; Hai, Y.; Fan, X. Three-Dimensional Reconstruction and Phenotype Measurement of Maize Seedlings Based on Multi-View Image Sequences. Front. Plant Sci. 2022, 13, 974339. [Google Scholar] [CrossRef] [PubMed]
  15. Andújar, D.; Calle, M.; Fernández-Quintanilla, C.; Ribeiro, Á.; Dorado, J. Three-Dimensional Modeling of Weed Plants Using Low-Cost Photogrammetry. Sensors 2018, 18, 1077. [Google Scholar] [CrossRef] [PubMed]
  16. Zhang, X.; Liu, J.; Zhang, B.; Sun, L.; Zhou, Y.; Li, Y.; Zhang, J.; Zhang, H.; Fan, X. Research on Object Panoramic 3D Point Cloud Reconstruction System Based on Structure From Motion. IEEE Access 2022, 10, 110064–110075. [Google Scholar] [CrossRef]
  17. He, W.; Ye, Z.; Li, M.; Yan, Y.; Lu, W.; Xing, G. Extraction of Soybean Plant Trait Parameters Based on SfM-MVS Algorithm Combined with GRNN. Front. Plant Sci. 2023, 14, 1181322. [Google Scholar] [CrossRef] [PubMed]
  18. Hopkinson, B.M.; King, A.C.; Owen, D.P.; Johnson-Roberson, M.; Long, M.H.; Bhandarkar, S.M. Automated Classification of Three-Dimensional Reconstructions of Coral Reefs Using Convolutional Neural Networks. PLoS ONE 2020, 15, e0230671. [Google Scholar] [CrossRef] [PubMed]
  19. Yang, J.; Kang, Z.; Cheng, S.; Yang, Z.; Akwensi, P.H. An Individual Tree Segmentation Method Based on Watershed Algorithm and Three-Dimensional Spatial Distribution Analysis from Airborne LiDAR Point Clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1055–1067. [Google Scholar] [CrossRef]
  20. Liu, Z.; Zhang, Q.; Wang, P.; Li, Z.; Wang, H. Automated Classification of Stems and Leaves of Potted Plants Based on Point Cloud Data. Biosyst. Eng. 2020, 200, 215–230. [Google Scholar] [CrossRef]
  21. Liu, B.; Huang, H.; Su, Y.; Chen, S.; Li, Z.; Chen, E.; Tian, X. Tree Species Classification Using Ground-Based LiDAR Data by Various Point Cloud Deep Learning Methods. Remote Sens. 2022, 14, 5733. [Google Scholar] [CrossRef]
  22. Zhu, Z.; Wang, J.; Wu, M. Pattern Recognition of Quartz Sand Particles with PointConv Network. Comput. Geotech. 2023, 153, 105061. [Google Scholar] [CrossRef]
  23. Martinez-Guanter, J.; Ribeiro, Á.; Peteinatos, G.G.; Pérez-Ruiz, M.; Gerhards, R.; Bengochea-Guevara, J.M.; Machleb, J.; Andújar, D. Low-Cost Three-Dimensional Modeling of Crop Plants. Sensors 2019, 19, 2883. [Google Scholar] [CrossRef] [PubMed]
  24. Cao, Y.; Wang, Y.; Xue, Y.; Zhang, H.; Lao, Y. FEC: Fast Euclidean Clustering for Point Cloud Segmentation. Drones 2022, 6, 325. [Google Scholar] [CrossRef]
  25. Chen, H.; Liu, S.; Wang, C.; Wang, C.; Gong, K.; Li, Y.; Lan, Y. Point Cloud Completion of Plant Leaves under Occlusion Conditions Based on Deep Learning. Plant Phenomics 2023, 5, 0117. [Google Scholar] [CrossRef] [PubMed]
  26. Fan, Z.; Wei, J.; Zhang, R.; Zhang, W. Tree Species Classification Based on PointNet++ and Airborne Laser Survey Point Cloud Data Enhancement. Forests 2023, 14, 1246. [Google Scholar] [CrossRef]
  27. Zhao, Y.; Chen, H.; Zen, L.; Li, Z. Effective Software Security Enhancement Using an Improved PointNet++. J. Syst. Softw. 2023, 204, 111794. [Google Scholar] [CrossRef]
  28. Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic Graph CNN for Learning on Point Clouds. ACM Trans. Graph. 2019, 38, 146. [Google Scholar] [CrossRef]
  29. Xie, Y.; Tian, J.; Zhu, X.X. Linking Points With Labels in 3D: A Review of Point Cloud Semantic Segmentation. IEEE Geosci. Remote Sens. Mag. 2020, 8, 38–59. [Google Scholar] [CrossRef]
  30. Widyaningrum, E.; Bai, Q.; Fajari, M.K.; Lindenbergh, R.C. Airborne Laser Scanning Point Cloud Classification Using the DGCNN Deep Learning Method. Remote Sens. 2021, 13, 859. [Google Scholar] [CrossRef]
  31. Tsai, C.-M.; Lai, Y.-H.; Sun, Y.-D.; Chung, Y.-J.; Perng, J.-W. Multi-Dimensional Underwater Point Cloud Detection Based on Deep Learning. Sensors 2021, 21, 884. [Google Scholar] [CrossRef] [PubMed]
  32. De Medeiros, A.D.; Capobiango, N.P.; Da Silva, J.M.; Da Silva, L.J.; Da Silva, C.B.; Dos Santos Dias, D.C.F. Interactive Machine Learning for Soybean Seed and Seedling Quality Classification. Sci. Rep. 2020, 10, 11267. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flow diagram of the information processing process.
Figure 2. Schematic diagram of an image acquisition device comprising (A) computer (to process image data and for 3D reconstruction), (B) camera (image collection), (C) rotation platform (rotating while carrying seedlings), and (D) eggplant seedling (experimental materials).
Figure 3. Point cloud reconstruction and preprocessing results: A(1)–D(1) shows the primary seedlings, A(2)–D(2) shows the secondary seedlings, and A(3)–D(3) shows the unhealthy seedlings; A(1)–A(3) shows the point cloud plants after 3D reconstruction; B(1)–B(3) shows the results of fast Euclidean clustering; C(1)–C(3) shows the results based on colour threshold filtering; D(1)–D(3) shows the results of voxel filtering.
Figure 4. Completion of missing point clouds: (A) shows the plant containing missing leaves; (B) shows the segmented incomplete leaves; (C) shows the missing leaves with the RGB data removed; (D) shows the purple missing section generated by PF-Net prediction; (E) shows the completed leaves; (F) shows the entire plant after point cloud completion.
Figure 5. The fitting performance of phenotype extraction values based on the 3D point cloud and manual measurements: (A) primary seedling plant height (actual vs. predicted); (B) primary seedling stem diameter (actual vs. predicted); (C) primary seedling number of leaves (random deviation obtained in the x- and y-axis directions for the same values) (actual vs. predicted); (D) secondary seedling plant height (actual vs. predicted); (E) secondary seedling stem diameter (actual vs. predicted); (F) secondary seedling number of leaves (random deviation obtained in the x- and y-axis directions for the same values) (actual vs. predicted).
Figure 6. Box plot data distribution for each primary and secondary seedling parameter: (A) data distribution on the primary and secondary seedling number of leaves; (B) data distribution on the primary and secondary seedling plant heights; (C) data distribution on the primary and secondary seedling stem diameters.
Figure 7. Model convergence in testing: (A) accuracy variation comparison; (B) loss variation comparison.
Figure 8. Confusion matrix of different models.
Table 1. Classification model test results.

Model          Average Accuracy (%)   Average Precision (%)   Average Recall (%)   Average F1-Score (%)
ResNet50       92.18                  92.18                   92.19                92.19
VGG16          93.23                  93.23                   93.23                93.23
MobileNetV2    91.67                  91.67                   91.68                91.67
PointNet++     94.44                  94.44                   94.53                94.49
DGCNN          93.06                  93.06                   93.10                93.08
PointConv      95.83                  95.83                   95.88                95.86
Table 2. Confidence intervals according to classification.

Model          Average Score (%)   95% Lower (%)   95% Upper (%)
ResNet50       92.19               91.50           92.87
VGG16          93.23               92.58           93.88
MobileNetV2    91.67               90.95           92.40
PointNet++     94.48               93.82           95.15
DGCNN          93.07               92.38           93.78
PointConv      95.85               95.30           96.41
Table 3. Significance testing according to classification.

Model Comparison              t-Value   p-Value
PointConv vs. ResNet50        3.46      0.002
PointConv vs. VGG16           2.81      0.01
PointConv vs. MobileNetV2     4.13      0.001
PointConv vs. PointNet++      2.16      0.037
PointConv vs. DGCNN           3.51      0.002
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Yuan, X.; Liu, J.; Wang, H.; Zhang, Y.; Tian, R.; Fan, X. Prediction of Useful Eggplant Seedling Transplants Using Multi-View Images. Agronomy 2024, 14, 2016. https://doi.org/10.3390/agronomy14092016

AMA Style

Yuan X, Liu J, Wang H, Zhang Y, Tian R, Fan X. Prediction of Useful Eggplant Seedling Transplants Using Multi-View Images. Agronomy. 2024; 14(9):2016. https://doi.org/10.3390/agronomy14092016

Chicago/Turabian Style

Yuan, Xiangyang, Jingyan Liu, Huanyue Wang, Yunfei Zhang, Ruitao Tian, and Xiaofei Fan. 2024. "Prediction of Useful Eggplant Seedling Transplants Using Multi-View Images" Agronomy 14, no. 9: 2016. https://doi.org/10.3390/agronomy14092016

APA Style

Yuan, X., Liu, J., Wang, H., Zhang, Y., Tian, R., & Fan, X. (2024). Prediction of Useful Eggplant Seedling Transplants Using Multi-View Images. Agronomy, 14(9), 2016. https://doi.org/10.3390/agronomy14092016

