Article

Bag of Features (BoF) Based Deep Learning Framework for Bleached Corals Detection

1 Department of Electronics Engineering, Sejong University, Seoul 05006, Korea
2 Department of Electrical Engineering, Polytechnique Montreal, Montreal, QC H3T 1J4, Canada
3 Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Korea
* Authors to whom correspondence should be addressed.
Big Data Cogn. Comput. 2021, 5(4), 53; https://doi.org/10.3390/bdcc5040053
Submission received: 11 September 2021 / Revised: 28 September 2021 / Accepted: 30 September 2021 / Published: 8 October 2021
(This article belongs to the Topic Applied Computer Vision and Pattern Recognition)

Abstract

Coral reefs are sub-aqueous calcium carbonate structures built by invertebrates known as corals. The charm and beauty of coral reefs attract tourists, and they play a vital role in preserving biodiversity, halting coastal erosion, and promoting trade. Coral reefs are also a source of compounds used in treatments for human immunodeficiency virus (HIV) and heart disease. However, reefs are declining because of over-exploitation, damaging fishery practices, marine pollution, and global climate change. The corals of Australia's Great Barrier Reef have started bleaching due to ocean acidification and global warming, an alarming threat to the earth's ecosystem. Many techniques have been developed to address such issues, but each has limitations due to the low resolution of images, diverse weather conditions, etc. In this paper, we propose a bag of features (BoF) based approach that can detect and localize bleached corals before safety measures are applied. The dataset contains images of bleached and unbleached corals, and various kernels of the support vector machine are used to classify the extracted features. The accuracies of handcrafted descriptors and deep convolutional neural networks are analyzed in detail and compared with current methods. Handcrafted descriptors such as the local binary pattern, histogram of oriented gradients, locally encoded transform feature histogram, gray level co-occurrence matrix, and completed joint scale local binary pattern are used for feature extraction, as are deep convolutional neural networks such as AlexNet, GoogLeNet, VGG-19, ResNet-50, Inception v3, and CoralNet. Experimental analysis shows that the proposed technique outperforms the current state-of-the-art methods, achieving 99.08% accuracy with a classification error of 0.92%. A novel bleached coral positioning algorithm is also proposed to locate bleached corals in coral reef images.

1. Introduction

Coral reefs are among the most important ecosystems on the planet because they help to maintain biodiversity and the life cycles of many marine species. Unfortunately, many large-scale mass mortality incidents linked to coral bleaching have been reported. Coral reefs have diverse intra-class variations in color, shape, size, and texture, and the color of corals varies significantly due to light attenuation and light scattering. Coral reefs play a vital role in preserving biodiversity, halting coastal erosion, and promoting trade. However, they are declining because of over-exploitation, damaging fishery practices, marine pollution, global climate change, and more. Corals appear pale when they bleach due to climate change, and coral bleaching is the leading cause of the decline in corals. Human activities have caused a tremendous increase in carbon dioxide concentration, ultimately leading to the devastation of marine ecosystems, which prominently include coral reefs [1,2,3,4]. Figure 1 shows three different types of corals: healthy, bleached, and dead. The figure also demonstrates the impact of coral bleaching on aquatic animal life.
Recently, the government of Australia started a research program to protect the Great Barrier Reef, and various methods have been adopted in the literature to detect bleached corals. In [5], a satellite bleaching hotspot remote sensing technique was proposed to monitor coral bleaching; however, this method is less efficient at anomalously high temperatures. In [6], a framework using radar to monitor coral bleaching was demonstrated, but it has the significant drawbacks of requiring too much equipment on the ocean surface and being very expensive. In [7], an airborne hyper-spectral sensor was used to classify bleached corals; it ranked only twenty-four out of thirty points correctly, for a classification accuracy of 80%.
Similarly, in [8], hyper-spectral bottom index imagery is used for bottom-type classification in coral reef areas. The drawback of this technique is that it needs an enormous number of samples in the dataset to achieve high accuracy. In [9], a deep convolutional neural network (VGG-19) is proposed for coral classification, which likewise needs a massive dataset for better accuracy.
Motivated by the protection of the marine ecosystem, this manuscript proposes a deep learning influenced vision-based technique to detect and classify bleached and unbleached corals. The accuracies of various handcrafted descriptors and deep convolutional neural networks are compared. Handcrafted descriptors such as the Local Binary Pattern (LBP) [10], Histogram of Oriented Gradients (HOG) [11], Locally Encoded Transform Feature Histogram (LETRIST) [12], Gray Level Co-occurrence Matrix (GLCM) [13], Completed Joint scale Local Binary Pattern (CJLBP) [14], Local Tetra Pattern (LTrP) [15], and Non-Redundant Local Binary Pattern (NRLBP) [16] are utilized for feature extraction. Deep convolutional neural networks including AlexNet [17], ResNet-50 [18], VGG-19 [19], GoogLeNet [20], Inception v3 [21], and CoralNet are used for feature extraction as well. The Support Vector Machine (SVM), decision tree, and k-nearest neighbor (kNN) are used as classifiers in combination with the corresponding deep learning influenced vision-based technique. This manuscript's main contribution is the classification of bleached and unbleached corals using a visual vocabulary, a combination of spatial, texture, and color features, followed by an SVM with a linear kernel.
The manuscript is organized as follows. Section 2 describes the literature review and related work on the classification of bleached and unbleached corals. Section 3 presents the proposed methodology and provides detailed information on the feature extraction techniques and classifiers. Section 4 examines the experimental results for various test cases, followed by the conclusion.

2. Related Work

Increasing awareness of the extent of the risks facing coral reef ecosystems, together with monitoring operations, has become crucial for assessing the impact of disturbance on reefs and following their subsequent recovery or decline. Coral reefs have long been vital to the health of coastlines and to tens of thousands of enterprises. Global warming, however, is becoming a serious threat to coral reefs. It has caused coral bleaching, in which stressed corals expel their symbiotic algae, potentially increasing the risk of coral morbidity and mortality. There is a serious need for more timely and cost-effective coral bleaching mapping.
Corals have distinct structures and colors, and image characteristics play a critical role in the classification of coral reefs. In [22], normalized chromaticity coordinates (NCC) along with LBP are followed by a three-layer back-propagation neural network to detect the presence of bleached corals. This approach distinguishes five classes, such as coral with algae, dead coral, abiotic algae, and living coral. Nevertheless, the scheme is not successful for complex underwater images. In [23], a hybrid handcrafted and CNN model-based coral classification technique is proposed that can correctly classify healthy corals. The use of an airborne hyper-spectral sensor and other techniques is explained in [1], where this method is used to classify bleached corals; however, it achieved an accuracy of only 80%, marking twenty-four points out of thirty correctly. Likewise, in [24], the authors used the hyper-spectral bottom index imagery technique. This method helps in bottom-type classification in coral reefs, but achieving higher accuracy requires a huge number of samples in the dataset; there is a direct relationship between accuracy and sample count. In [9], another method is presented for the classification of coral reefs based on the deep convolutional neural network VGG-19, but this method also requires an enormous dataset for high accuracy. In [25], the authors proposed a pseudo invariant features based technique for the detection of bleached corals, achieving a maximum accuracy of 88.9%. In [26], a model is presented to detect scleractinian corals, and the article also highlights the impact of micro-plastics on coral reefs. Conti-Jerpe et al. [3] performed a Bayesian analysis of carbon and nitrogen isotopes to find the trophic overlap between corals; the article suggests that trophic strategy relates to bleaching resistance in reef-building corals.
Moreover, some authors have also examined hyper-spectral imaging technology. In [27], a remote sensing-based technique is proposed that allows for the simultaneous examination of large reef areas in order to assess species composition, with a surveillance sampling intensity suited to assessing temporal variations. In most image classification and recognition tasks, image representations generated from pre-trained deep networks outperform handcrafted features [28]. These learned representations are generalizable and transferable to other domains, such as underwater image categorization.

Motivation and Contribution

The analysis of the related work makes evident that the techniques present in the literature have limitations. In this article, we propose a novel BoF based technique to locate bleached corals in images captured by underwater drones. Our contributions are as follows.
  • We have created a novel custom CNN named CoralNet for the classification of bleached and unbleached corals.
  • We propose a novel Bag of Features (BoF) technique integrated with SVM to classify bleached and unbleached corals with high accuracy. BoF is a vector containing handcrafted features extracted with the help of HOG and LBP as well as spatial features extracted with AlexNet and CoralNet.
  • We also propose a novel bleached corals positioning algorithm to locate the position of bleached corals.
In the upcoming section, the proposed BoF based technique is explained.

3. Proposed Framework

The proposed BoF based framework is presented in Figure 2. Test images are captured with underwater drones [29] and then passed to the ground station to obtain the output. The model is trained with 15k images of bleached and unbleached corals from the Great Barrier Reef of Australia. The SVM classifier is employed to categorize the features extracted through the D-CNNs and handcrafted descriptors. The basic steps involved in the proposed methodology are visually represented in Figure 3. Initially, a patch is extracted from the coral reef image. In the next step, texture and color features are extracted with handcrafted descriptors, while spatial features are extracted with the D-CNN models. These bags of features (BoF) are concatenated to form the visual vocabulary (VV) vector, which is provided as input to the classifier.

3.1. Explanation of Steps

Initially, an image is taken with the help of an underwater drone. Next, the image is segmented into small patches. Features are extracted from each patch with handcrafted descriptors and D-CNNs, and a visual vocabulary (VV) is created from these extracted features, as shown in Figure 4. The training features are passed to the classifier, i.e., the SVM, which decides whether the VV features belong to bleached or healthy coral. We evaluated different handcrafted features as well as different D-CNNs, and AlexNet shows the highest accuracy. We also evaluated different classifiers, i.e., SVM, kNN, and decision tree, and the SVM outperforms all others.

3.2. Feature Extraction

Handcrafted and spatial features are concatenated to obtain visual vocabularies (VV). Texture and color features are extracted with handcrafted descriptors, while spatial features are extracted with the D-CNN models.

3.2.1. Spatial Features

Features are extracted with handcrafted descriptors as well as D-CNN models. Initially, an image is captured with the help of underwater drones. The image is then preprocessed and resized to the input size of the D-CNN.

3.2.2. Pretrained D-CNN

In the case of AlexNet, the input image size is 227 × 227 × 3. AlexNet is pretrained on ImageNet and has twenty-five layers in total, five of which are convolutional layers that extract the spatial features. The other layers in the AlexNet architecture are fully connected layers, max-pooling layers, a sigmoid layer, and ReLU layers. The feature vector is obtained at fully connected layer 7 (FC-7) of AlexNet. Figure 5 illustrates the convolutional layers of AlexNet.
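As a concrete illustration, the following is a minimal sketch of FC-7 feature extraction from a pretrained AlexNet. It uses PyTorch/torchvision, which is an assumption on our part; the paper does not name its deep learning framework.

```python
# Sketch: extract the 4096-dimensional FC-7 feature vector from a pretrained
# AlexNet (torchvision is assumed; the paper's framework is not stated).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.eval()

# Keep the classifier up to FC-7 (the second 4096-unit fully connected layer).
fc7_head = torch.nn.Sequential(*list(alexnet.classifier.children())[:6])

preprocess = T.Compose([
    T.Resize((227, 227)),  # input size used in the paper
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def fc7_features(image_path: str) -> torch.Tensor:
    """Return the FC-7 feature vector for one coral patch."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        x = alexnet.features(x)        # five convolutional stages
        x = alexnet.avgpool(x)
        x = torch.flatten(x, 1)
        return fc7_head(x).squeeze(0)  # shape: (4096,)
```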

3.2.3. Custom D-CNN: CoralNet

We create a custom D-CNN named CoralNet for the extraction of spatial features. CoralNet has thirteen layers: three convolutional (Conv2D) layers, two max-pooling (MaxPooling2D) layers, two dense layers, one flatten layer, and three activation layers. The input size is kept at 227 × 227 × 3. Two activation layers use the rectified linear unit (ReLU) as the activation function, while the last activation layer uses softmax. Features are extracted with the Conv2D layers, and the feature vector is taken from the last layer. The training options used for CoralNet are summarized in Table 1, and its simplified architecture is shown in Figure 6.
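A minimal Keras-style sketch of CoralNet under the layer counts reported above is given below. The filter counts and kernel sizes are assumptions, since the paper does not specify them; the training options follow Table 1.

```python
# Sketch of CoralNet: 3 Conv2D, 2 MaxPooling2D, 1 Flatten, 2 Dense,
# 3 Activation layers. Filter counts/kernel sizes are assumed values.
from tensorflow import keras
from tensorflow.keras import layers

coralnet = keras.Sequential([
    keras.Input(shape=(227, 227, 3)),
    layers.Conv2D(32, 3),
    layers.Activation("relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3),
    layers.Activation("relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(128, 3),
    layers.Flatten(),
    layers.Dense(64),
    layers.Dense(2),                # bleached vs. unbleached
    layers.Activation("softmax"),
])

# Training options from Table 1: Adam, cross-entropy, 10 epochs, batch size 64.
coralnet.compile(optimizer="adam", loss="categorical_crossentropy",
                 metrics=["accuracy"])
# coralnet.fit(x_train, y_train, epochs=10, batch_size=64)
```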

3.2.4. Handcrafted Features

The handcrafted features are extracted from the images using several handcrafted descriptors. The input images are preprocessed, and local binary patterns, the gray level co-occurrence matrix, the histogram of oriented gradients, and several other texture features are extracted; all of these features are combined into a single feature vector. A sketch of this extraction is given below.
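The following sketch illustrates how such a combined handcrafted vector could be assembled with scikit-image; the LBP, GLCM, and HOG parameter values here are assumptions rather than values from the paper.

```python
# Sketch: concatenate LBP, GLCM, and HOG descriptors into one feature vector.
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops, hog

def handcrafted_vector(gray_patch: np.ndarray) -> np.ndarray:
    """gray_patch: 2-D uint8 grayscale coral patch."""
    # LBP histogram (uniform patterns, 8 neighbors, radius 1 -> 10 bins).
    lbp = local_binary_pattern(gray_patch, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    # GLCM statistics at distance 1, angle 0.
    glcm = graycomatrix(gray_patch, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = np.array([graycoprops(glcm, p)[0, 0]
                           for p in ("contrast", "homogeneity",
                                     "energy", "correlation")])

    # HOG over the whole patch.
    hog_feats = hog(gray_patch, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

    return np.concatenate([lbp_hist, glcm_feats, hog_feats])
```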

3.3. Bag of Features (BoF) and Visual Vocabulary (VV)

The spatial features and handcrafted features are concatenated into a single feature vector called the bag of features (BoF). After applying k-means clustering, the clustered BoF vector is called the visual vocabulary (VV).
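A minimal sketch of this step is shown below, assuming per-patch handcrafted and spatial feature vectors from the previous subsections; the cluster count of eight follows Section 3.3.1.

```python
# Sketch: form BoF vectors and cluster them into a visual vocabulary (VV).
import numpy as np
from sklearn.cluster import KMeans

def bag_of_features(handcrafted: np.ndarray, spatial: np.ndarray) -> np.ndarray:
    """Concatenate handcrafted and spatial (D-CNN) features into one BoF vector."""
    return np.concatenate([handcrafted, spatial])

def visual_vocabulary(bof_vectors: np.ndarray, k: int = 8) -> np.ndarray:
    """Cluster the BoF vectors; the cluster centers form the visual vocabulary."""
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(bof_vectors)
    return kmeans.cluster_centers_
```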

3.3.1. K-Means Clustering Algorithm

The k-means clustering algorithm is applied to the selected features, namely the color, texture, and spatial features. A total of eight clusters are formed; the centroids are selected randomly at the start and updated at every iteration. The pseudo code of the k-means clustering algorithm is given in Algorithm 1.
Algorithm 1: k-means Clustering Algorithm.
Input: features as data points.
Let $F = \{F_1, F_2, F_3, \ldots, F_n\}$ be the set of data points and $C = \{C_1, C_2, C_3, \ldots, C_o\}$ be the set of cluster centers.
1. Randomly select $o$ cluster centers.
2. Calculate the distance between each feature and the cluster centers.
3. Assign each feature to the cluster center whose distance from it is the minimum over all cluster centers.
4. Recalculate each new cluster center using $C_i = \frac{1}{o_i} \sum_{j=1}^{o_i} F_j$, where $o_i$ is the number of features in the $i$-th cluster.
5. Recalculate the distance between each feature and the newly obtained cluster centers.
6. If no feature was reassigned, stop; otherwise, repeat from step 3.
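A direct NumPy rendering of Algorithm 1 is sketched below; the paper's own implementation details are not given, so this is only illustrative.

```python
# Sketch: k-means clustering following the steps of Algorithm 1.
import numpy as np

def kmeans(features: np.ndarray, o: int, seed: int = 0) -> np.ndarray:
    """Cluster n feature vectors into o clusters; return cluster assignments."""
    rng = np.random.default_rng(seed)
    # Step 1: randomly select o cluster centers from the features.
    centers = features[rng.choice(len(features), size=o, replace=False)].astype(float)
    assignment = np.full(len(features), -1)
    while True:
        # Steps 2-3: assign each feature to its nearest center.
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        new_assignment = dists.argmin(axis=1)
        # Step 6: stop when no feature is reassigned.
        if np.array_equal(new_assignment, assignment):
            return assignment
        assignment = new_assignment
        # Steps 4-5: recalculate each center as the mean of its features.
        for i in range(o):
            members = features[assignment == i]
            if len(members) > 0:
                centers[i] = members.mean(axis=0)
```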

3.3.2. Validation of Clusters

For the validation and evaluation of the clusters obtained from the k-means clustering algorithm, we use silhouette analysis, which gives the degree of separation between clusters. The pseudo code for silhouette analysis is given in Algorithm 2.
Algorithm 2: Silhouette Analysis.
For each sample $i$:
1. Compute the average distance to all features in the same cluster, $\alpha_i$.
2. Compute the average distance to all features in the closest cluster, $\beta_i$.
3. Compute the coefficient $S_c = \frac{\beta_i - \alpha_i}{\max(\alpha_i, \beta_i)}$.
If $S_c = 0$, the sample is very close to the neighboring clusters.
If $S_c = 1$, the sample is far away from the neighboring clusters.
If $S_c = -1$, the sample is assigned to the wrong cluster.
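In practice, the silhouette coefficient of Algorithm 2 is available off the shelf; the sketch below uses scikit-learn's silhouette_score, which implements the same coefficient.

```python
# Sketch: validate the k-means clusters with the mean silhouette coefficient.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def validate_clusters(bof_vectors, k: int = 8) -> float:
    """Return the mean silhouette coefficient S_c over all samples."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(bof_vectors)
    # Near 1: well separated; near 0: on a cluster boundary;
    # negative: likely assigned to the wrong cluster.
    return silhouette_score(bof_vectors, labels)
```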

3.4. Classifier

The classifier used is the SVM [30,31,32,33]. It classifies objects and samples by creating a hyperplane between them, as depicted in Figure 7; a larger margin gives better accuracy, so the margin is kept high. Three SVM kernels are considered. For binary classification, the linear kernel of the SVM proves efficient, while for multi-class classification the Gaussian and polynomial kernels prove effective. These kernels are given by Equations (1)–(3), respectively.
$$\kappa(\xi_i, \xi_j) = \xi_i^{\tau} \xi_j \qquad (1)$$

$$\kappa(\xi_i, \xi_j) = (1 + \xi_i^{\tau} \xi_j)^{\rho} \qquad (2)$$

Here $\xi_i$ and $\xi_j$ are the vectors whose dot product is calculated, plotted in a space of order $\rho$.

$$\kappa(\xi_i, \xi_j) = \exp\!\left(-\frac{\|\xi_i - \xi_j\|^2}{2\varrho^2}\right) \qquad (3)$$

where $\|\xi_i - \xi_j\|$ is the Euclidean distance between the two samples. The width of the Gaussian kernel is set by the variance $\varrho$, which controls the classifier performance.
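A minimal sketch of training the SVM on the VV features is given below, using scikit-learn as an assumed toolkit; its "linear", "poly", and "rbf" kernels correspond to Equations (1)–(3).

```python
# Sketch: fit an SVM on visual-vocabulary features (bleached vs. unbleached).
from sklearn.svm import SVC

def train_svm(vv_features, labels, kernel: str = "linear") -> SVC:
    """kernel is one of 'linear', 'poly', or 'rbf' (Equations (1)-(3))."""
    clf = SVC(kernel=kernel)
    clf.fit(vv_features, labels)
    return clf

# Usage: clf = train_svm(X_train, y_train); clf.predict(X_test)
```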

3.5. Confusion Matrix

The confusion matrix is a means of validating the performance of a machine learning model and summarizes the outcome of the classification problem. The following are some of its essential parameters.
  • True Positive ($TP$): an accurate prediction of bleached corals.
  • True Negative ($TN$): an accurate prediction of unbleached corals.
  • False Positive ($FP$): a false prediction of bleached corals.
  • False Negative ($FN$): a false prediction of unbleached corals.
  • Sensitivity ($TPR$): the ratio of accurate predictions of bleached corals, given by Equation (4).
    $$\mathrm{Sensitivity}\ (TPR) = \frac{TP}{TP + FN} \qquad (4)$$
  • Specificity ($S_y$): the ratio of accurate predictions of unbleached corals, given by Equation (5).
    $$\mathrm{Specificity} = \frac{TN}{TN + FP} \qquad (5)$$
  • Accuracy: the ratio of correct predictions to the total number of instances, given by Equation (6).
    $$\mathrm{Accuracy} = \frac{TN + TP}{FP + TP + FN + TN} \qquad (6)$$
  • F1-score: the harmonic mean of sensitivity and specificity, given by Equation (7).
    $$F1\text{-}score = \frac{2 \cdot TPR \cdot S_y}{TPR + S_y} \qquad (7)$$
  • Cohen's Kappa ($\kappa$): $\kappa$ measures the agreement between the classifier's predictions and the true labels while accounting for agreement by chance, and can be calculated by Equation (8) [34].
    $$\kappa = \frac{2\,(TP \cdot TN - FP \cdot FN)}{(TP + FP)(FP + TN) + (TP + FN)(FN + TN)} \qquad (8)$$
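The metrics of Equations (4)–(8) follow directly from the four confusion matrix counts, as the sketch below shows (here "positive" denotes bleached coral).

```python
# Sketch: compute Equations (4)-(8) from binary confusion matrix counts.
def evaluate(tp: int, tn: int, fp: int, fn: int) -> dict:
    tpr = tp / (tp + fn)                        # sensitivity, Eq. (4)
    sy = tn / (tn + fp)                         # specificity, Eq. (5)
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # Eq. (6)
    f1 = 2 * tpr * sy / (tpr + sy)              # Eq. (7), as defined in the paper
    kappa = (2 * (tp * tn - fp * fn) /
             ((tp + fp) * (fp + tn) + (tp + fn) * (fn + tn)))  # Eq. (8)
    return {"sensitivity": tpr, "specificity": sy,
            "accuracy": accuracy, "f1": f1, "kappa": kappa}
```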

3.6. Dataset

The datasets used in this research contain images of corals from the Great Barrier Reef of Australia. Most of the images were captured with an underwater drone [22]. The model is trained with 60% of the images, 20% are used for validation, and testing is done on the remaining 20%. The first dataset is publicly accessible at [35], while the other two datasets, the bleached and unbleached corals dataset and the bleached, healthy, and dead (BHD) corals dataset, are publicly accessible at [36,37], respectively. We used a publicly available dataset with nine different classes to test the generalized performance of the model; it can be accessed at [38]. We created the bleached and unbleached corals dataset to compare the performance of the model trained on the first dataset, and the BHD dataset for the classification of bleached, healthy, and dead corals. We also tested our model on the classification of Crustose Coralline Algae (CCA), turf algae, macroalgae, sand, Acropora, Pavona, Montipora, Pocillopora, and Porites; these coral and non-coral classes are explained in detail in [39]. Figure 8 shows sample patches extracted from the pictures in the datasets. The datasets are preprocessed to avoid over-fitting. A sketch of the split is given below.
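The following sketch shows one way to realize the 60/20/20 train/validation/test split with scikit-learn; the paper does not state how the split was implemented, so the tooling is an assumption.

```python
# Sketch: stratified 60/20/20 train/validation/test split.
from sklearn.model_selection import train_test_split

def split_dataset(X, y, seed: int = 0):
    # Hold out 40%, then split it evenly into validation and test sets.
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, test_size=0.4, random_state=seed, stratify=y)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=seed, stratify=y_rest)
    return X_train, X_val, X_test, y_train, y_val, y_test
```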

3.7. Bleached Corals Positioning Algorithm

We propose a novel BoF and VV based algorithm to locate the positions of bleached corals in the full picture captured by underwater drones. In this algorithm, the input image is divided into segments to create the scales of the input image pyramid. From this pyramid, patches are extracted with more than 50% overlap. Each patch acts as input to the handcrafted descriptors as well as the D-CNN models, and its handcrafted and D-CNN features are combined to create the BoF vector. This BoF vector is passed through k-means clustering to create the VV vector, which is given as input to the SVM classifier. If the SVM output is bleached corals, the coordinates of the local patch are extracted along with its corresponding pyramid scale. At the end, a bounding box is drawn around the bleached corals using these coordinates. The pseudo code of the algorithm is described in Algorithm 3, a sketch of which follows below, whereas the graphical illustration of the algorithm is shown in Figure 9.
Algorithm 3: Bleached Corals Positioning Algorithm.
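A sliding-window sketch of Algorithm 3 follows; the patch size, stride, pyramid scales, and the helper functions extract_bof and to_vv are assumptions tying together the earlier sketches, not the paper's exact settings.

```python
# Sketch: locate bleached corals via an image pyramid and sliding window.
import numpy as np
from skimage.transform import resize

def locate_bleached(image, extract_bof, to_vv, svm,
                    patch=64, stride=32, scales=(1.0, 0.75, 0.5)):
    """Return bounding boxes (x, y, size) around patches classified as bleached."""
    boxes = []
    for s in scales:                                   # image pyramid
        h, w = int(image.shape[0] * s), int(image.shape[1] * s)
        scaled = resize(image, (h, w), anti_aliasing=True)
        for y in range(0, h - patch + 1, stride):      # 50% overlap via stride
            for x in range(0, w - patch + 1, stride):
                bof = extract_bof(scaled[y:y + patch, x:x + patch])
                vv = to_vv(bof)                        # k-means quantization to VV
                if svm.predict(vv.reshape(1, -1))[0] == "bleached":
                    # Map patch coordinates back to the original image scale.
                    boxes.append((int(x / s), int(y / s), int(patch / s)))
    return boxes
```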
In the next section, experimental results are presented.

4. Experimental Results

Handcrafted descriptors and D-CNN models are utilized for feature extraction, while the SVM is used as the corresponding classifier. Table 2, Table 3 and Table 4 report the highest accuracy, specificity, sensitivity, F1-score, and Cohen's kappa achieved by the SVM kernels for each handcrafted descriptor and deep convolutional neural network (D-CNN) on the first, second, and third datasets, respectively. Figure 10 shows the performance of the different classifiers on all three datasets; the SVM is compared with kNN, decision tree, and random forest, and it clearly performs best. Figure 11 shows the confusion matrices of the proposed method for the first and third datasets. The performance of the model is also efficient for the multi-class classification problem. Moreover, the model gives comparable results when applied to the other binary classification dataset: trained on one dataset, it performs well on the other as well.
Experimental results show that the bag of features (BoF) with the linear kernel of the SVM gives the highest accuracy compared to the other combinations, as arranged in Table 2 and Table 3. The linear kernel achieves the highest accuracy because the classification between bleached and unbleached corals is a binary problem that can be efficiently separated by a line or hyperplane; for binary problems, the linear kernel of the SVM classifier gives better results than the Gaussian and polynomial kernels, which demonstrate efficient results in multi-class scenarios. Table 4 shows that the proposed model also gives the highest accuracy when applied to the multi-class dataset. The results show that the recall and precision of the proposed method are also higher than those of the rest of the state-of-the-art methods. The output results obtained via the positioning algorithm are shown in Figure 12.

4.1. Generalized Performance of BoF Model on Moorea Corals Dataset

We trained the model using the first dataset [35], while for generalized testing we used the Moorea Corals dataset [38]. There are nine classes in this dataset; four are non-corals, while the remaining five are corals. The experimental results obtained on this dataset using conventional handcrafted techniques, pre-trained D-CNN models, and the BoF based model are given in Table 5. The proposed model achieves an accuracy of 98% on this dataset, indicating that its generalized performance is better than that of the other techniques.

4.2. Bleached Corals Localization

Various bleached corals are localized via the proposed algorithm. The proposed algorithm has the advantage of a low execution time of just 1 ms, and it can be implemented without a GPU. The system used for experimentation is an HP Core i7 (7th generation, 2.6 GHz per processor) with 8 GB RAM.
In the next section, a brief conclusion of all the work is presented.

5. Conclusions

Coral reefs play a vital role in preserving biodiversity, halting coastal erosion, and promoting trade, and their fascinating colors and shapes enhance the beauty of the ocean. Coral reefs also shelter many marine animals, and the Great Barrier Reef of Australia is among the most beautiful in the world. Unfortunately, many large-scale mass mortality events associated with coral bleaching have been documented due to a variety of anthropogenic and environmental influences. This paper proposed a novel technique to address this problem by efficiently classifying bleached and unbleached corals. Experimental results demonstrate that the bag of features (BoF) with the linear kernel of the SVM classifier gives the highest accuracy, 99.08% for binary classification and 98.11% for multi-class classification, with high precision and recall. The highest accuracy, specificity, and sensitivity of the SVM kernels for each handcrafted descriptor and deep convolutional neural network are also tabulated. The F1-scores of different state-of-the-art techniques are compared, and the superiority of BoF over the other methods is highlighted.

Author Contributions

Conceptualization, S.J. and M.R.; methodology, S.J.; software, S.J.; validation, M.R. and A.H.; formal analysis, S.J.; investigation, M.R.; resources, M.R.; data curation, S.J.; writing—original draft preparation, S.J.; writing—review and editing, M.R.; visualization, M.R.; supervision, A.H.; project administration, A.H.; funding acquisition, M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hedley, J.D.; Roelfsema, C.M.; Chollett, I.; Harborne, A.R.; Heron, S.F.; Weeks, S.; Skirving, W.J.; Strong, A.E.; Eakin, C.M.; Christensen, T.R.L.; et al. Remote Sensing of Coral Reefs for Monitoring and Management: A Review. Remote Sens. 2016, 8, 118. [Google Scholar] [CrossRef] [Green Version]
  2. Sully, S.; Burkepile, D.E.; Donovan, M.K.; Hodgson, G.; Woesik, R. A global analysis of coral bleaching over the past two decades. Nat. Commun. 2019, 10, 1264. [Google Scholar] [CrossRef] [Green Version]
  3. Conti-Jerpe, I.; Thompson, P.D.; Wong, C.; Oliveira, N.L.; Duprey, N.; Moynihan, M.; Baker, D. Trophic strategy and bleaching resistance in reef-building corals. Sci. Adv. 2020, 6, eaaz5443. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. DeCarlo, T.M.; Gajdzik, L.; Ellis, J.; Coker, D.J.; Roberts, M.B.; Hammerman, N.M.; Pandolfi, J.M.; Monroe, A.A.; Berumen, M.L. Nutrient-supplying ocean currents modulate coral bleaching susceptibility. Sci. Adv. 2020, 6, 5493–5499. [Google Scholar] [CrossRef] [PubMed]
  5. Fawad; Jamil Khan, M.; Rahman, M.; Amin, Y.; Tenhunen, H. Low-rank multi-channel features for robust visual object tracking. Symmetry 2019, 11, 1155. [Google Scholar]
  6. González-Rivero, M.; Beijbom, O.; Rodriguez-Ramirez, A.; Ganase, A.; Gonzalez-Marrero, Y.; Herrera-Reveles, A.; Kennedy, E.V.; Kim, C.J.; Lopez-Marcano, S.; Markey, K. Monitoring of coral reefs using artificial intelligence: A feasible and cost-effective approach. Remote Sens. 2020, 12, 489–502. [Google Scholar] [CrossRef] [Green Version]
  7. Liu, B.; Liu, Z.; Men, S.; Li, Y.; Ding, Z.; He, J.; Zhao, Z. Underwater hyperspectral imaging technology and its applications for detecting and mapping the seafloor: A review. Sensors 2020, 20, 4962. [Google Scholar] [CrossRef]
  8. Yang, B.; Xiang, L.; Chen, X.; Jia, W. An online chronic disease prediction system based on incremental deep neural network. Comput. Mater. Contin. 2021, 67, 951–964. [Google Scholar] [CrossRef]
  9. Mahmood, A.; Bennamoun, M.; Sohel, F.A.; Gary, R.; Kendrick, A.; Hovey, R.; Kendrick, G.A.; Fisher, R.B. Deep image representations for coral image classification. IEEE J. Ocean. Eng. 2019, 44, 121–131. [Google Scholar] [CrossRef] [Green Version]
  10. Tekin, R.; Ertuĝrul, Ö.F.; Kaya, Y. New local binary pattern approaches based on color channels in texture classification. Multimed Tools Appl. 2020, 79, 32541–32561. [Google Scholar] [CrossRef]
  11. Yuan, B.H.; Liu, G.H. Image retrieval based on the gradient-structures histogram. Neural Comput. Appl. 2020, 32, 11717–11727. [Google Scholar] [CrossRef]
  12. Song, T.; Li, H.; Meng, F.; Wu, Q.; Cai, J. LETRIST: Locally encoded transform feature histogram for rotation-invariant texture classification. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 1565–1579. [Google Scholar] [CrossRef]
  13. Kaya, Y.; Kuncan, M.; Kaplan, K.; Minaz, M.R.; Ertunç, H.M. A new feature extraction approach based on one-dimensional gray level co-occurrence matrices for bearing fault classification. J. Exp. Theor. Artif. Intell. 2021, 33, 161–178. [Google Scholar] [CrossRef]
  14. Nkenyereye, L.; Tama, B.A.; Lim, S. A stacking-based deep neural network approach for effective network anomaly detection. Comput. Mater. Contin. 2021, 66, 2217–2227. [Google Scholar]
  15. Murala, S.; Maheshwari, R.P.; Balasubramanian, R. Local tetra patterns: A new feature descriptor for content-based image retrieval. IEEE Trans. Image Process. 2012, 21, 2874–2886. [Google Scholar] [CrossRef] [PubMed]
  16. Bemani, A.; Baghban, A.; Shamshirband, S.; Mosavi, A.; Csiba, P.; Várkonyi-Kóczy, A.R. Applying ann, anfis, and lssvm models for estimation of acid solvent solubility in supercritical CO2. Comput. Mater. Contin. 2020, 63, 1175–1204. [Google Scholar] [CrossRef]
  17. Jamil, S.; Rahman, M.; Ullah, A.; Badnava, S.; Forsat, M.; Mirjavadi, S.S. Malicious uav detection using integrated audio and visual features for public safety applications. Sensors 2020, 20, 3923. [Google Scholar] [CrossRef]
  18. Chu, Y.; Yue, X.; Yu, L.; Sergei, M.; Wang, Z. Automatic image captioning based on ResNet50 and LSTM with soft attention. Wirel. Commun. Mob. Comput. 2020, 2020, 8909458. [Google Scholar] [CrossRef]
  19. Wazirali, R. Intrusion detection system using fknn and improved PSO. Comput. Mater. Contin. 2021, 67, 1429–1445. [Google Scholar] [CrossRef]
  20. Alsharman, N.; Jawarneh, I. Googlenet cnn neural network towards chest ct coronavirus medical image classification. J. Comput. Sci. 2020, 16, 620–625. [Google Scholar] [CrossRef]
  21. Joshi, K.; Tripathi, V.; Bose, C.; Bhardwaj, C. Robust sports image classification using inceptionv3 and neural networks. Procedia Comput. Sci. 2020, 167, 2374–2381. [Google Scholar] [CrossRef]
  22. Bennett, M.K.; Younes, N.; Joyce, K. Automating Drone Image Processing to Map Coral Reef Substrates Using Google Earth Engine. Drones 2020, 4, 50. [Google Scholar] [CrossRef]
  23. Raphael, A.; Dubinsky, Z.; Iluz, D.; Netanyahu, N.S. Neural Network Recognition of Marine Benthos and Corals. Diversity 2020, 12, 29. [Google Scholar] [CrossRef] [Green Version]
  24. Odagawa, S.; Takeda, T.; Yamano, H.; Matsunaga, T. Bottom-type classification in coral reef area using hyperspectral bottom index imagery. In Proceedings of the 2015 7th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Tokyo, Japan, 2–5 June 2015; pp. 1–4. [Google Scholar]
  25. Xu, J.; Zhao, J.; Wang, F.; Chen, Y.; Lee, Z. Detection of Coral Reef Bleaching Based on Sentinel-2 Multi-Temporal Imagery: Simulation and Case Study. Front. Mar. Sci. 2021, 8, 268. [Google Scholar] [CrossRef]
  26. Saliu, F.; Montano, S.; Leoni, B.; Lasagni, M.; Galli, P. Microplastics as a threat to coral reef environments: Detection of phthalate esters in neuston and scleractinian corals from the Faafu Atoll, Maldives. Mar. Pollut. Bull. 2019, 142, 234–241. [Google Scholar] [CrossRef] [PubMed]
  27. Lodhi, V.; Chakravarty, D.; Mitra, P. Hyperspectral imaging for earth observation: Platforms and instruments. J. Indian Inst. Sci. 2018, 98, 429–443. [Google Scholar] [CrossRef]
  28. Mahmood, A.; Bennamoun, M.; An, S.; Sohel, F.; Boussaid, F.; Hovey, R.; Kendrick, G.; Fisher, R.B. Coral classification with hybrid feature representations. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; Volume 1, pp. 519–523. [Google Scholar]
  29. Meng, L.; Hirayama, T.; Oyanagi, S. Underwater-drone with panoramic camera for automatic fish recognition based on deep learning. IEEE Access 2018, 6, 17880–17886. [Google Scholar] [CrossRef]
  30. Han, K.-X.; Chien, W.; Chiu, C.-C.; Cheng, Y.-T. Application of Support Vector Machine (SVM) in the Sentiment Analysis of Twitter DataSet. Appl. Sci. 2020, 10, 1125. [Google Scholar] [CrossRef] [Green Version]
  31. Kranjčić, N.; Medak, D.; Župan, R.; Rezo, M. Support Vector Machine Accuracy Assessment for Extracting Green Urban Areas in Towns. Remote Sens. 2019, 11, 655. [Google Scholar] [CrossRef] [Green Version]
  32. Rahman, M.H.; Shahjalal, M.; Hasan, M.K.; Ali, M.O.; Jang, Y.M. Design of an SVM Classifier Assisted Intelligent Receiver for Reliable Optical Camera Communication. Sensors 2021, 21, 4283. [Google Scholar] [CrossRef]
  33. Fan, J.; Lee, J.; Lee, Y. A Transfer Learning Architecture Based on a Support Vector Machine for Histopathology Image Classification. Appl. Sci. 2021, 11, 6380. [Google Scholar] [CrossRef]
  34. Chicco, D.; Warrens, M.J.; Jurman, G. The Matthews Correlation Coefficient (MCC) is More Informative Than Cohen’s Kappa and Brier Score in Binary Classification Assessment. IEEE Access 2021, 9, 78368–78381. [Google Scholar] [CrossRef]
  35. Shihavuddin, A.S.M. Coral reef dataset. Mendeley Data 2017, V2. [Google Scholar] [CrossRef]
  36. Bleached and Unbleached Corals Classification. Available online: https://www.kaggle.com/sonainjamil/bleached-corals-detection (accessed on 22 September 2021).
  37. BHD Corals. Available online: https://www.kaggle.com/sonainjamil/bhd-corals (accessed on 23 September 2021).
  38. Moorea Labeled Corals. Available online: http://vision.ucsd.edu/content/moorea-labeled-corals (accessed on 27 September 2021).
  39. Beijbom, O.; Edmunds, P.J.; Kline, D.I.; Mitchell, B.G.; Kriegman, D. Automated Annotation of Coral Reef Survey Images. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 1170–1177. [Google Scholar]
Figure 1. Different types of corals and their impact on aquatic life.
Figure 2. The proposed framework for bleached corals detection.
Figure 3. Visual representation of the steps of the proposed framework.
Figure 4. Visual vocabulary of features.
Figure 5. Feature extraction with AlexNet.
Figure 6. Feature extraction with CoralNet.
Figure 7. Demonstration of the SVM.
Figure 8. Sample patches of images from the datasets.
Figure 9. Bleached corals positioning algorithm.
Figure 10. Comparison of accuracies of different classifiers for all datasets.
Figure 11. Confusion matrices of the binary class and multi-class datasets.
Figure 12. Positioning of the bleached corals in the full coral reef image.
Table 1. Training Parameters of CoralNet.

| Parameter | Value |
|---|---|
| Optimizer | Adam |
| Epochs | 10 |
| Batch Size | 64 |
| Loss Function | Cross Entropy |
Table 2. Performance of hand-crafted descriptors and D-CNN models for the first dataset.

| Technique's Name | SVM Kernel | Sensitivity | Specificity | Accuracy | F1-Score | Cohen's Kappa (κ) |
|---|---|---|---|---|---|---|
| LBP [10] | Polynomial | 70.1% | 75.9% | 71.8% | 0.729 | 0.731 |
| HOG [11] | Linear | 66.3% | 69.3% | 67.1% | 0.678 | 0.663 |
| LETRIST [12] | Linear | 56.2% | 59.7% | 56.6% | 0.579 | 0.594 |
| GLCM [13] | RBF | 66.2% | 75.1% | 69.3% | 0.704 | 0.732 |
| GLCM [13] | Polynomial | 73.1% | 80.4% | 76.7% | 0.766 | 0.751 |
| CJLBP [14] | Linear | 71.2% | 77.3% | 72.7% | 0.741 | 0.743 |
| LTrP [15] | Linear | 48.4% | 50.2% | 49.1% | 0.493 | 0.524 |
| AlexNet [17] | Linear | 94.1% | 96.3% | 95.2% | 0.952 | 0.966 |
| ResNet-50 [18] | Linear | 92.2% | 96.4% | 94.5% | 0.942 | 0.952 |
| VGG-19 [19] | Linear | 92.1% | 92.1% | 92.2% | 0.921 | 0.851 |
| GoogleNet [20] | Linear | 85.1% | 93.1% | 88.2% | 0.889 | 0.873 |
| Inceptionv3 [21] | Linear | 77.1% | 92.3% | 83.3% | 0.840 | 0.862 |
| CoralNet | — | 92.1% | 97.3% | 95.0% | 0.950 | 0.962 |
| BoF | Linear | 99.1% | 99.0% | 99.08% | 0.995 | 0.982 |
Table 3. Performance of hand-crafted descriptors and D-CNN models for the second dataset (Bleached and Unbleached Corals).

| Technique's Name | SVM Kernel | Sensitivity | Specificity | Accuracy | F1-Score | Cohen's Kappa (κ) |
|---|---|---|---|---|---|---|
| LBP [10] | Quadratic | 70.56% | 70.56% | 70.60% | 0.706 | 0.411 |
| HOG [11] | Linear | 94.64% | 94.40% | 94.40% | 0.945 | 0.889 |
| LETRIST [12] | Linear | 58.2% | 61.7% | 58.6% | 0.599 | 0.534 |
| GLCM [13] | RBF | 69.2% | 78.1% | 72.3% | 0.714 | 0.702 |
| GLCM [13] | Cubic | 72.1% | 81.2% | 77.3% | 0.756 | 0.731 |
| CJLBP [14] | Linear | 73.2% | 75.3% | 73.7% | 0.751 | 0.723 |
| LTrP [15] | Linear | 50.2% | 53.2% | 51.1% | 0.529 | 0.506 |
| AlexNet [17] | Linear | 97.78% | 97.78% | 97.80% | 0.978 | 0.956 |
| ResNet-50 [18] | Linear | 98.91% | 98.89% | 98.90% | 0.989 | 0.978 |
| VGG-19 [19] | Linear | 94.3% | 94.3% | 94.5% | 0.943 | 0.884 |
| GoogleNet [20] | Linear | 93.33% | 93.33% | 93.33% | 0.933 | 0.867 |
| Inceptionv3 [21] | Linear | 95.56% | 95.56% | 95.60% | 0.956 | 0.911 |
| CoralNet | — | 92.1% | 97.3% | 95.0% | 0.950 | 0.962 |
| BoF | Linear | 99.2% | 98.9% | 99.0% | 0.985 | 0.984 |
Table 4. Performance of hand-crafted descriptors and D-CNN models for the third dataset (Bleached, Healthy, and Dead (BHD) Corals).

| Technique's Name | Classifier | Sensitivity | Specificity | Accuracy | F1-Score | Cohen's Kappa (κ) |
|---|---|---|---|---|---|---|
| LBP [10] | SVM | 69.3% | 71.4% | 69.8% | 0.689 | 0.691 |
| HOG [11] | SVM | 74.42% | 60.05% | 75.2% | 0.665 | 0.621 |
| LETRIST [12] | SVM | 55.3% | 58.5% | 55.4% | 0.569 | 0.584 |
| GLCM [13] | SVM | 65.2% | 74.1% | 68.3% | 0.694 | 0.722 |
| CJLBP [14] | SVM | 70.2% | 76.3% | 71.7% | 0.731 | 0.733 |
| LTrP [15] | SVM | 47.4% | 49.2% | 48.1% | 0.483 | 0.514 |
| AlexNet [17] | SVM | 86.37% | 83.73% | 92.20% | 0.850 | 0.826 |
| ResNet-50 [18] | SVM | 85.43% | 85.80% | 92.60% | 0.856 | 0.852 |
| VGG-19 [19] | SVM | 82.1% | 82.1% | 82.2% | 0.821 | 0.781 |
| GoogleNet [20] | SVM | 80.55% | 80.51% | 88.60% | 0.805 | 0.803 |
| Inceptionv3 [21] | SVM | 81.10% | 76.44% | 86.30% | 0.787 | 0.761 |
| CoralNet | — | 91.1% | 96.3% | 94.0% | 0.940 | 0.952 |
| BoF | SVM | 98.1% | 98.0% | 98.11% | 0.985 | 0.972 |
Table 5. Generalized Performance on the Moorea Corals Dataset.

| Technique's Name | Classifier | Sensitivity | Specificity | Accuracy | F1-Score | Cohen's Kappa (κ) |
|---|---|---|---|---|---|---|
| LBP [10] | SVM | 67.5% | 70.2% | 67.8% | 0.676 | 0.683 |
| HOG [11] | SVM | 75.37% | 61.15% | 76.35% | 0.665 | 0.634 |
| LETRIST [12] | SVM | 56.50% | 59.63% | 56.56% | 0.585 | 0.591 |
| GLCM [13] | SVM | 64.21% | 73.89% | 67.24% | 0.683 | 0.710 |
| CJLBP [14] | SVM | 72.54% | 78.65% | 73.45% | 0.752 | 0.753 |
| LTrP [15] | SVM | 46.39% | 49.89% | 48.45% | 0.476 | 0.503 |
| AlexNet [17] | SVM | 90.13% | 91.84% | 93.80% | 0.910 | 0.916 |
| ResNet-50 [18] | SVM | 88.53% | 93.90% | 93.23% | 0.893 | 0.891 |
| VGG-19 [19] | SVM | 85.70% | 85.70% | 85.80% | 0.858 | 0.803 |
| GoogleNet [20] | SVM | 90.85% | 90.61% | 94.30% | 0.907 | 0.901 |
| Inceptionv3 [21] | SVM | 86.52% | 83.39% | 90.81% | 0.865 | 0.878 |
| CoralNet | — | 91.1% | 96.3% | 94.0% | 0.940 | 0.952 |
| BoF | SVM | 98.07% | 98.10% | 98.09% | 0.983 | 0.970 |