Article

Deep Learning and Handcrafted Features for Virus Image Classification

Dipartimento di Ingegneria Dell’informazione, University of Padova, via Gradenigo 6, 35131 Padova, Italy
* Author to whom correspondence should be addressed.
J. Imaging 2020, 6(12), 143; https://doi.org/10.3390/jimaging6120143
Submission received: 15 November 2020 / Revised: 12 December 2020 / Accepted: 18 December 2020 / Published: 21 December 2020

Abstract

In this work, we present an ensemble of descriptors for the classification of virus images acquired using transmission electron microscopy. We trained multiple support vector machines on different sets of features extracted from the data, using both handcrafted algorithms and a pretrained deep neural network as feature extractors. The proposed fusion strongly boosts the performance obtained by each stand-alone approach, achieving state-of-the-art performance.

1. Introduction

Recognizing and classifying viruses is fundamental to the medical field for both diagnosis and research. Since this task requires highly qualified medical staff, there is growing interest in making the process automatic. Virus images can be acquired using electron microscopy, which is currently not used in clinical practice but could become an innovative diagnostic tool.
The main difficulty in classifying viruses is their sheer number: the introduction of DNA sequencing made the number of classified viruses grow exponentially.
In addition, other factors make the creation of an accurate virus taxonomy very complex, such as their replication mechanisms and genetic heritage.
The availability of virus datasets acquired using transmission electron microscopy allowed the computer vision research community to look for methods for automatic classification. This task has been widely investigated over the last decades, and the approaches to its solution followed the same path as computer vision research in general. At first, researchers trained classifiers on descriptors extracted using handcrafted algorithms [1,2,3,4,5,6]. Later, the increasing popularity of deep learning led to end-to-end methods based on convolutional networks [7].
In general, the application of machine learning to virus classification, and to medicine more broadly, is still evolving and reaching higher and higher accuracies: as machine learning techniques become increasingly reliable, they may become able to replace highly skilled personnel in these tasks.
In this paper, we create a large and effective ensemble of descriptors for virus classification from transmission electron microscopy images, combining handcrafted features and deep learning.
A large set of handcrafted features is used to train a set of support vector machines (SVMs), which are then combined by sum rule; another SVM is trained on the features extracted by the last average pooling layer of a DenseNet201 neural network pre-trained on ImageNet and then fine-tuned on the virus images.
The experimental results confirm the validity of the proposed method: our ensemble obtains state-of-the-art performance on the tested dataset. The MATLAB code used in this study is available at https://github.com/LorisNanni.
The rest of the paper is organized as follows. In Section 2, we summarize some of the most relevant papers in the field. In Section 3, we describe the different approaches used to feed the SVM classifiers. In Section 4, the experimental results are reported, and in Section 5, conclusions are drawn.

2. Related Work

One of the first applications of computer vision to the study of viruses is described in [1], in which ring filters are used for feature extraction on a dataset containing images of four types of viruses. The method proposed in [1] uses a Bayesian classifier and reaches an overall accuracy of 98% in the worst case, although on a simple dataset.
In [2], features extracted with local binary patterns (LBP) and local directional patterns (LDP) were used to train a random forest, which obtained an average error of 13% on the same dataset used in this paper. In [3], a new feature extractor for polyomavirus images is introduced in order to automate the segmentation process.
In 2015, innovative feature descriptors such as multi-quinary coding, edge, and bag of features, which we explain and test below, were proposed in [4] to improve the performance of the FUSION ensemble described in [5], leading to a higher accuracy. The following year, Wen et al. proposed the combination of multi-scale principal component analysis (PCA) and multi-scale completed local binary patterns and obtained an overall accuracy of 86.20% on the same dataset used in this study [6].
More recent studies applied transfer learning with pre-trained convolutional neural networks (SqueezeNet, DenseNet, ResNet, and InceptionV3) [7], obtaining very promising results.
In 2020, Backes et al. [8] proposed the application of various extractors (such as Gabor wavelets, the randomized neural network signature, and others) on the same dataset used in this work, with promising results: the combination of all these methods led to an average accuracy of 87.27%.
Wen et al. [28] applied a method based on filtering images through principal component analysis and Gaussian filters, obtaining an overall accuracy of 88%, which was the state-of-the-art performance when the paper was published.

3. The Proposed Method

This section describes the methods tested in this paper and those used in previous works. Every method returns a feature vector that describes the image, and a support vector machine classifies it. Finally, we describe the deep learning approach. To avoid misunderstandings about parameter values, the code of each descriptor is available at https://github.com/LorisNanni.

3.1. Method Description and SVMs

We use a large number of texture descriptors available in the literature. We train an SVM on each descriptor and evaluate its stand-alone performance.
SVMs are classical tools in machine learning. An SVM is a trainable classifier that finds a separating hyperplane among the samples in a high-dimensional space by maximizing the distance between the data points and the hyperplane. SVMs are designed for two-class problems; however, they can be generalized to n-class problems by training a set of SVMs, each of which detects whether a sample belongs to a given class. Each SVM returns the probability that a sample belongs to its class, and the predicted class is the one with the largest probability. We refer the reader to [9] for a more detailed introduction to SVMs.
We then evaluate the accuracy reached by the ensemble of these SVMs. We combine the individual results of the SVMs using the sum rule, which consists in summing all the output probability vectors of the SVMs and defining the output class of a sample as the one with the highest sum of scores. Besides these descriptors, we also train an SVM on the features extracted by a deep learning architecture. We now describe these methods; a minimal sketch of the sum-rule fusion follows.
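The sketch below assumes scikit-learn and precomputed per-descriptor feature matrices (all names are illustrative; the paper's own implementation is in MATLAB). Note that scikit-learn's SVC handles the multiclass extension internally rather than through an explicit set of one-vs-all SVMs.

```python
import numpy as np
from sklearn.svm import SVC

def sum_rule_fusion(train_sets, y_train, test_sets):
    """Train one SVM per descriptor and fuse the class scores by sum rule.

    train_sets/test_sets: lists of (n_samples, n_features) arrays,
    one entry per texture descriptor (same sample order everywhere).
    """
    score_sum, classes = None, None
    for X_tr, X_te in zip(train_sets, test_sets):
        # Probability outputs let us sum per-class scores across SVMs.
        clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_train)
        proba = clf.predict_proba(X_te)          # (n_test, n_classes)
        score_sum = proba if score_sum is None else score_sum + proba
        classes = clf.classes_                   # identical ordering for all SVMs
    # Sum rule: the predicted class is the one with the highest summed score.
    return classes[np.argmax(score_sum, axis=1)]
```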

3.2. Texture Descriptors Tested in This Paper

Here, we describe the texture descriptors used in this paper and those used in the papers we compare with.

3.2.1. Local Binary Pattern (LBP)

The LBP operator provides a descriptor for any image using the grey levels of its pixels. In its classic version, for each pixel the 8 adjacent pixels are considered, forming a 3 × 3 square.
For each of these 8 neighbors, its relationship with the central pixel is evaluated: if its grey level is greater than or equal to that of the central pixel, it is replaced by a 1, otherwise by a 0.
The resulting binary pattern is then converted to a decimal number. The LBP operator is thus evaluated as:

$$LBP(x_c, y_c) = \sum_{p \in P} 2^p \, q(i_p - i_c),$$

where P is the set of neighbors of the central pixel, i_c and i_p are the grey levels of the central pixel and of its p-th neighbor, respectively, and q(z) is a quantization function defined as:

$$q(z) = \begin{cases} 1, & z \geq 0 \\ 0, & \text{otherwise.} \end{cases}$$

It is worth noticing that the number of neighbors and the radius are parameters that can be modified at will and are not static.
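As a minimal illustration of the operator defined above (not the paper's MATLAB implementation), a NumPy sketch of the classic radius-1, 8-neighbor LBP:

```python
import numpy as np

def lbp_3x3(img):
    """Classic LBP code map: radius 1, 8 neighbors, as in the formula above."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    # The 8 neighbors, enumerated clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center, dtype=np.int64)
    for p, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # q(i_p - i_c): 1 where the neighbor is >= the center, else 0.
        code += (neighbor >= center).astype(np.int64) << p
    return code

# The descriptor is the 256-bin histogram of the code map:
# hist = np.bincount(lbp_3x3(img).ravel(), minlength=256)
```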

3.2.2. Discriminative Local Binary Pattern (DLBP)

Described by Kobayashi [10], this algorithm is a modified version of the local binary pattern.
The idea is to find, for each pixel patch, the best threshold that splits the pixels inside it. This threshold is obtained by minimizing a residual error computed as:

$$\varepsilon(\tau) = \frac{1}{N} \left\{ \sum_{i \,|\, I(r_i) \leq \tau} \left( I(r_i) - \mu_0 \right)^2 + \sum_{i \,|\, I(r_i) > \tau} \left( I(r_i) - \mu_1 \right)^2 \right\},$$

with:

$$\mu_0 = \frac{1}{N_0} \sum_{i \,|\, I(r_i) \leq \tau} I(r_i) \quad \text{and} \quad N_0 = \sum_i q(\tau - I(r_i)),$$

$$\mu_1 = \frac{1}{N_1} \sum_{i \,|\, I(r_i) > \tau} I(r_i) \quad \text{and} \quad N_1 = \sum_i q(I(r_i) - \tau).$$

The residual error ε(τ) corresponds to the intra-class variance σ_W². The best threshold is the one that maximizes the between-class variance σ_B² of the pixels in the patch, computed as:

$$\sigma_B^2 = \sigma^2 - \sigma_W^2.$$

Once this threshold is found, the weight of each pixel patch in the final histogram is calculated as:

$$\omega = \frac{\sigma_B^2(\tau^*)}{\sigma^2 + C},$$

where τ* is the optimal threshold and C is a constant that handles the cases where σ² is close to zero, which may cause the weight of the vote to fluctuate too much. In Kobayashi's paper, it is set to 0.01². The feature is extracted with (radius, neighbors) = (1, 8), (2, 8), (3, 8).
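The following sketch illustrates the per-patch threshold search and the weight ω defined above; the exhaustive search over the unique grey levels of the patch is our assumption, not necessarily Kobayashi's exact procedure.

```python
import numpy as np

def dlbp_threshold_and_weight(patch, C=0.01**2):
    """Return the threshold minimizing eps(tau) and the patch weight omega."""
    vals = np.asarray(patch, dtype=np.float64).ravel()
    sigma2 = vals.var()                          # total variance sigma^2
    best_tau, best_sigma2_B = vals.min(), -np.inf
    for tau in np.unique(vals):
        low, high = vals[vals <= tau], vals[vals > tau]
        # eps(tau) is the summed within-class scatter; minimizing it is
        # equivalent to maximizing the between-class variance sigma_B^2.
        eps = ((low - low.mean()) ** 2).sum()
        if high.size:
            eps += ((high - high.mean()) ** 2).sum()
        sigma2_B = sigma2 - eps / vals.size      # sigma_B^2 = sigma^2 - sigma_W^2
        if sigma2_B > best_sigma2_B:
            best_tau, best_sigma2_B = tau, sigma2_B
    omega = best_sigma2_B / (sigma2 + C)         # voting weight of the patch
    return best_tau, omega
```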

3.2.3. Sorted Consecutive Local Binary Pattern (scLBP)

The sorted consecutive local binary pattern (scLBP) algorithm was described by Ryu et al. in 2015 [11].
This approach tries to solve some inconsistencies of the rotation-invariant local binary pattern. In scLBP, four components are extracted from each pixel patch: SCLBP_S, SCLBP_M+, SCLBP_M−, and SCLBP_C.
These components are then encoded by counting how many consecutive 0s and 1s there are; the results are stored in two different arrays that are sorted and concatenated.
For example, the pattern {00111010} is encoded into {3, 1, 0} and {2, 1, 1}, and then into {3, 1, 0, 2, 1, 1}; a sketch of this encoding is given below. After this step, the histogram is obtained through dictionary learning with a kd-tree on the raw features of each pixel of the image; in our study, however, we used k-means with 255 clusters instead of the kd-tree.
The center pixel is described by a 16-element cell vector, where each cell contains a matrix of centroids for one possible radius (radius from 1 to 4 in steps of 0.2, hence 16 elements).
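The following sketch reproduces the run-length encoding of the worked example above; padding each sorted run list to a fixed length is an illustrative choice.

```python
from itertools import groupby

def sc_encode(pattern, n_runs=3):
    """Encode a binary string into sorted, concatenated run lengths."""
    runs = {"0": [], "1": []}
    for bit, group in groupby(pattern):
        runs[bit].append(len(list(group)))       # length of each consecutive run
    def fix(r):
        # Sort descending, then pad/truncate to a fixed number of runs.
        r = sorted(r, reverse=True)[:n_runs]
        return r + [0] * (n_runs - len(r))
    return fix(runs["1"]) + fix(runs["0"])

print(sc_encode("00111010"))  # -> [3, 1, 0, 2, 1, 1]
```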

3.2.4. Attractive Repulsive Center Symmetric Local Binary Pattern (ARCSLBP)

ARCSLBP is an LBP variant proposed by El Merabet et al. [12]. From a grey-scale portion of the image, it considers four center-symmetric triplets around the center pixel (Figure 1a), together with the average local grey level (ALGL), the average global grey level (AGGL), and the mean value of the considered neighbors.
As can be seen in Figure 1b, the pattern of a single ACSLBP is shorter than that of standard LBP: with 8 neighbors, the LBP method produces a histogram with 2^8 possible patterns, ACSLBP only 2^7.
The final ARCSLBP length is 2^8 (2^7 for ACSLBP plus 2^7 for RCSLBP), the same as LBP but with improved performance, as shown by El Merabet et al. [12].
In this paper, ARCSLBPrn is proposed, a variant with configurable radius and neighbors as in standard LBP; the number of triplets can go up to half the number of neighbors (for example, 8 triplets with 16 neighbors), and the average local grey level (ALGL) and the average neighbor value are affected accordingly. The final histogram length is 2^(half the number of neighbors + 3 pixel means). The feature is extracted with (radius, neighbors) = (1, 8), (2, 8), (3, 8).

3.2.5. Sigma Attractive Repulsive Center Symmetric Local Binary Pattern (sigmaARCSLBP)

Alpaslan et al. [13] merged the effectiveness of ARCSLBP with the directional derivative information of the Hessian matrix:

$$H(I(x, y)) = \begin{bmatrix} I_{xx} = \frac{\partial^2 I(x,y)}{\partial x^2} & I_{xy} = \frac{\partial^2 I(x,y)}{\partial x \partial y} \\ I_{yx} = \frac{\partial^2 I(x,y)}{\partial x \partial y} & I_{yy} = \frac{\partial^2 I(x,y)}{\partial y^2} \end{bmatrix},$$

$$G_{xx} = \frac{1}{2 \pi \sigma^4} \left( \frac{x^2}{\sigma^2} - 1 \right) e^{-\frac{x^2 + y^2}{2 \sigma^2}}, \qquad G_{yy} = \frac{1}{2 \pi \sigma^4} \left( \frac{y^2}{\sigma^2} - 1 \right) e^{-\frac{x^2 + y^2}{2 \sigma^2}},$$

$$I_{xx} = I * G_{xx}, \qquad I_{yy} = I * G_{yy}, \qquad Mag = \sqrt{I_{xx}^2 + I_{yy}^2}.$$

The magnitude information of the Hessian matrix, computed with different σ values, is used in the ACS-LBP method with variable radius and neighbors. The magnitude is useful to identify intense or flat portions of the image (Figure 2); a sketch of its computation is given below. The feature is extracted with (radius, neighbors) = (1, 8), (2, 8), (3, 8).
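A minimal sketch of the Hessian-magnitude map defined above, assuming SciPy; the kernel half-width is an illustrative choice.

```python
import numpy as np
from scipy.signal import convolve2d

def hessian_magnitude(img, sigma=1.0, half=4):
    """Compute Mag = sqrt(Ixx^2 + Iyy^2) with Gaussian second-derivative kernels."""
    x, y = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**4)
    Gxx = g * (x**2 / sigma**2 - 1)              # second x-derivative of a Gaussian
    Gyy = g * (y**2 / sigma**2 - 1)              # second y-derivative of a Gaussian
    Ixx = convolve2d(img, Gxx, mode="same", boundary="symm")
    Iyy = convolve2d(img, Gyy, mode="same", boundary="symm")
    return np.sqrt(Ixx**2 + Iyy**2)
```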

3.2.6. Alpha Local Binary Pattern (alphaLBP)

Proposed by Kaplan et al. [14], the alpha LBP operator calculates the value of each pixel based on an angle value α. The coding is the same as standard LBP, but the neighbors considered lie on a line, as shown in Figure 3a.
This method has performance similar to standard LBP and the same pattern length, but it is helpful for finding image micro-patterns. As in LBP, a histogram of pattern frequencies is extracted from the output image (Figure 3b). In this study, we use a line of 8 neighbors.

3.2.7. Heterogeneous Auto-Similarities of Characteristics (HASC)

HASC [15] combines linear relations, captured by covariances (COV), and nonlinear associations, captured by entropy combined with mutual information (EMI), of heterogeneous dense feature maps.
Covariance matrices are used as descriptors because they are low-dimensional, robust to noise, and the covariance between two features describes the shape of their joint PDF.
The entropy (E) of a random variable measures the uncertainty of its value, and the mutual information (MI) of two random variables captures their generic linear and nonlinear dependencies. HASC divides the image into portions, calculates the EMI matrix for each, and then concatenates the vectorized EMI and COV; a sketch of the COV part is given below.
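As an illustration of the COV half of the descriptor (the EMI part is omitted for brevity), a minimal sketch assuming a precomputed stack of dense per-pixel feature maps:

```python
import numpy as np

def cov_descriptor(feature_maps):
    """feature_maps: (H, W, D) stack of dense per-pixel features."""
    h, w, d = feature_maps.shape
    X = feature_maps.reshape(h * w, d)
    C = np.cov(X, rowvar=False)                  # (D, D) covariance matrix
    iu = np.triu_indices(d)                      # symmetric: keep the upper triangle
    return C[iu]

def hasc_cov(img_features, grid=2):
    """Concatenate COV descriptors over a grid x grid partition of the image."""
    h, w, _ = img_features.shape
    parts = []
    for i in range(grid):
        for j in range(grid):
            block = img_features[i * h // grid:(i + 1) * h // grid,
                                 j * w // grid:(j + 1) * w // grid]
            parts.append(cov_descriptor(block))
    return np.concatenate(parts)
```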

3.2.8. Local Concave Micro Structure Pattern (LCvMSP)

Differently from LBP, LCvMSP (El Merabet et al. [16]) calculates the relation between the center pixel and its neighbors mathematically.
This method uses the median value of the 3 × 3 neighborhood and of the entire image, in both normal and grey-scale values; these two new statistical triplets are used in LCvMSP.
Adding these two extra bits to the concave binary thresholding function yields 1023 different patterns, with improved statistical information (Alpaslan et al. [17]).

3.2.9. JET Texton Learning

This method extracts six derivatives of Gaussian (DtGs) for every pixel, forming a six-dimensional feature vector (jet vector); k-means clustering is then used to construct a jet texton dictionary (see Roy et al. [18] for exhaustive information). The result is a histogram over the jet textons. JET shows excellent classification performance on large datasets.

3.2.10. Adaptive Hybrid Patterns (AHP)

Introduced by Zhu et al. [19], AHP combines a hybrid texture model (HTD) and an adaptive quantization algorithm (AQA). HTD is composed of local primitive features and a global spatial structure, while AQA improves noise robustness.

3.3. Texture Descriptors Proposed in the Literature

This subsection summarizes a selection of descriptors from Santos et al. [4].

3.3.1. Local Ternary Pattern (LTP)

LTP (Tan and Triggs [20]) is a simple LBP variant in which a threshold is used in the comparison between the neighbor and the central pixel. This helps with different light conditions in a uniform area, provides better discrimination power, and tolerates a certain amount of noise before binarizing (Santos et al. [4]).
For every pair of pixels considered, 3 values are possible through the function s(x), with 0 assigned when the absolute difference is less than the threshold τ, resulting in a histogram of length 3^(number of neighbors). Santos et al. decide to split the pattern into LBP+ and LBP− parts, two histograms of length 2^(number of neighbors), and then concatenate them:

$$s(x) = \begin{cases} 1, & x \geq \tau \\ 0, & -\tau \leq x < \tau \\ -1, & \text{otherwise.} \end{cases}$$

A sketch of this split is given below.
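A minimal sketch of the ternary function s(x) and the LBP+/LBP− split, applied to the neighbor-minus-center differences of one neighborhood:

```python
import numpy as np

def ltp_split(diffs, tau):
    """diffs: array of neighbor-minus-center differences for one pixel."""
    s = np.where(diffs >= tau, 1, np.where(diffs < -tau, -1, 0))
    upper = (s == 1).astype(int)                 # LBP+ pattern
    lower = (s == -1).astype(int)                # LBP- pattern
    # Each pattern is read as a binary code, giving two 2^P histograms
    # that are concatenated into the final descriptor.
    return upper, lower
```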

3.3.2. Local Phase Quantization (LPQ)

LPQ (Ojansivu and Heikkilä [21]) exploits the blur-invariance property of the Fourier phase spectrum. It considers a rectangular neighborhood in which it computes the 2D short-term Fourier transform (STFT) to extract local phase information.

3.3.3. Rotation Invariant Co-Occurrence among Adjacent LBP

RIC-LBP (Nosaka, Ohkawa, and Fukui [22]) is a version of LBP that considers the spatial co-occurrence of the feature codes.
Each co-occurrence pair is labeled by its binary code and by a spatial vector connecting the two center pixels. Radius and neighbors are fixed as in Santos et al. [4].

3.3.4. Local Binary Pattern Histogram Fourier

LHF (Ahonen et al. [23]) uses rotations of the LBP neighborhood by an angle that is a multiple of 360°/P, where P is the number of neighbors.
The algorithm exploits the fact that if the image is rotated, the neighbors are rotated by the same angle. Radius and neighbors are variable (Santos et al. [4]).

3.3.5. Dense LBP (DLBP)

DLBP (Ylioinas et al. [24]) uses the same neighbors as LBP, plus additional neighborhoods centered on the corners between the center pixels.
This results in a longer histogram that is more robust to noise. Radius and neighbors are variable (Santos et al. [4]).

3.3.6. Multi Quinary Coding (MQC)

This method extends the LTP function s(x) by using two thresholds, τ and θ, so that five outputs are possible:

$$f(x) = \begin{cases} 2, & x \geq \tau \\ 1, & \theta \leq x < \tau \\ 0, & -\theta \leq x < \theta \\ -1, & -\tau \leq x < -\theta \\ -2, & \text{otherwise.} \end{cases}$$

As with LTP, the labels are split into 4 binary patterns to keep the histograms compact (Paci et al. [25]); a sketch is given below.
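A minimal sketch of the quinary function f(x) and its split into four binary patterns; the boundary conventions follow our reconstruction of the formula above.

```python
import numpy as np

def quinary_split(diffs, tau, theta):
    """Map differences to {-2, -1, 0, 1, 2} and split into 4 binary patterns."""
    # np.select evaluates the conditions in order; the first match wins.
    f = np.select([diffs >= tau,     # x >= tau
                   diffs >= theta,   # theta <= x < tau
                   diffs >= -theta,  # -theta <= x < theta
                   diffs >= -tau],   # -tau <= x < -theta
                  [2, 1, 0, -1], default=-2)
    # One binary pattern per non-zero level, as in multi-quinary coding.
    return [(f == level).astype(int) for level in (2, 1, -1, -2)]
```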

3.3.7. Edge (ED)

The idea behind the ED method is to focus on the important portions of an image and apply an LBP-like approach there. These salient regions are the edge and non-edge areas, found where the gradient magnitude is highest. ED is used to create an ensemble of LBP-like methods; for details, see Santos et al. [4].

3.3.8. Difference of Gaussian (DoG)

DoG indicates the convolution of the original image with a 2D DoG filter, obtained by subtracting two Gaussian kernels with different variances; the result is similar to a band-pass filter. For details, see Santos et al. [4]; a sketch is given below.
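A minimal sketch of the filter, assuming SciPy; the σ values and kernel size are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

def dog_filter(img, sigma1=1.0, sigma2=2.0, half=6):
    """Convolve an image with a difference-of-Gaussians (band-pass) kernel."""
    x, y = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    def gauss(s):
        g = np.exp(-(x**2 + y**2) / (2 * s**2))
        return g / g.sum()                       # normalized Gaussian kernel
    kernel = gauss(sigma1) - gauss(sigma2)       # subtracting gives the DoG
    return convolve2d(img, kernel, mode="same", boundary="symm")
```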

3.3.9. Bag of Feature (BoF)

BoF divides the training images into regions and extracts different features from them in order to build a visual vocabulary. From a new image, feature vectors are extracted and assigned to the nearest matching terms of the vocabulary; for details, see Santos et al. [4]. A sketch of the pipeline is given below.
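A minimal sketch of the two pipeline steps, assuming scikit-learn; the local descriptors themselves are left abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_stack, k=100):
    """descriptor_stack: (n_patches, n_dims) local features from training images."""
    return KMeans(n_clusters=k, n_init=10).fit(descriptor_stack)

def bof_histogram(vocab, image_descriptors):
    """Assign each local descriptor to its nearest visual word and count."""
    words = vocab.predict(image_descriptors)
    return np.bincount(words, minlength=vocab.n_clusters)
```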

3.4. Deep Learning Approach

Deep learning and neural networks revolutionized the field of machine learning. They can be implemented using several different algorithms, all of which consist of a cascade of many processing layers organized in a hierarchical structure.
Each of these layers adds a level of abstraction to the overall representation. In image interpretation tasks, layers close to the input deal with low-level features like edges and texture.
These low-level features are combined to build more complex representations: layer by layer, complexity increases.
The approach to deep learning considered in this paper is based on convolutional neural networks (CNNs) [26].
In this paper, we use a version of DenseNet201 [27] pretrained on ImageNet. DenseNet is an evolution of ResNet that includes dense connections among layers: each layer is connected to every following layer in a feed-forward fashion. Therefore, the number of connections increases from the number of layers L to L × (L + 1)/2. DenseNet improves the performance of previous models at the cost of increased computational requirements. It accepts images of 224 × 224 pixels.
We used the following hyperparameters for training: 50 training epochs, a mini-batch size of 30 observations, and a learning rate of 0.001.
As data augmentation protocol, we independently reflect the images in both the left-right and the top-bottom directions with 50% probability. We also linearly scale the images along both axes by two random factors drawn from the interval [1, 2].
We use the trained DenseNet as a feature extractor, taking the last average pooling layer as the output of the network. We then train an SVM on these extracted features; a sketch is given below.
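Our implementation is in MATLAB; the following PyTorch sketch only approximates the feature-extraction step (it loads ImageNet weights, without the fine-tuning stage), taking the globally average-pooled DenseNet201 feature maps as the 1920-dimensional descriptor fed to the SVM.

```python
import torch
import torch.nn.functional as F
from torchvision import models

net = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
net.eval()

def densenet_features(batch):
    """batch: (N, 3, 224, 224) tensor, ImageNet-normalized."""
    with torch.no_grad():
        fmap = net.features(batch)               # convolutional feature maps
        fmap = F.relu(fmap)
        pooled = F.adaptive_avg_pool2d(fmap, 1)  # the last average pooling layer
    return torch.flatten(pooled, 1)              # (N, 1920) feature vectors

# These vectors are then used to train the SVM described above.
```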

4. Results

As already mentioned in the introduction, the dataset used can be found at http://www.cb.uu.se/_Gustaf/virustexture/ and is described by Kylberg et al. [2]. It contains 1500 transmission electron microscopy (TEM) images of size 41 × 41 pixels of viruses belonging to 15 different species (specifically: Adenovirus, Astrovirus, CCHF, Cowpox, Dengue, Ebola, Influenza, Lassa, Marburg, Norovirus, Orf, Papilloma, Rift Valley, Rotavirus, and West Nile). An example TEM image can be seen in Figure 3b.
In Table 1, we report the performance of the texture descriptors tested here. Clearly, the performance varies considerably across descriptors.
We now report the results obtained by DenseNet and by the fusion of multiple methods. To combine the results of multiple classifiers, we use the sum rule, i.e., we sum the output scores of all the classifiers in the ensemble, and the class selected by the ensemble is the one with the highest sum of scores.
In Table 2, we report the accuracies of the following ensemble methods:
- NewSet is the sum rule among the methods reported in Table 1. It is interesting to note that the fusion strongly outperforms the stand-alone approaches.
- OLD is the previous set proposed in Santos et al. [4].
- HandC is the fusion by sum rule of the handcrafted methods of NewSet and OLD. This ensemble does not boost the performance of NewSet significantly.
- DeepL is the SVM trained using the last average pooling layer as input. Notice that using DenseNet201 directly as a classifier, a lower accuracy of 78.93% is obtained.
- HandC + DeepL is the sum rule between HandC and DeepL. Before the fusion, the scores of HandC and DeepL are normalized to mean 0 and standard deviation 1, because the numbers of classifiers in HandC and DeepL are different.
Finally, in Table 3, we compare our approach with other methods already reported in the literature using the same dataset and the same testing protocol.
In [2], the results for RDPF and RDPF + LBPF were obtained on the fixed-scale version of our dataset, which is not available online, so a direct comparison is not possible. In the fixed-scale version, 1 pixel corresponds to 1 nanometer; the viruses in this study have diameters from 25 to 270 nm, while the object-scale images are always resized to 41 × 41 pixels.
Our ensemble obtains performance comparable with the best results already published. We remark that our DenseNet does not reach the accuracy reported in [7].

5. Conclusions

In this paper, we proposed an ensemble of handcrafted descriptors and deep learning features for the classification of virus images acquired using transmission electron microscopy.
For each descriptor, a different SVM is trained, and the resulting SVMs are combined by sum rule. Our largest ensemble obtains state-of-the-art performance on a very competitive dataset. This shows that combining handcrafted descriptors and deep learning features boosts the performance beyond what can be obtained using only handcrafted descriptors or only deep learning.
The MATLAB code of the proposed ensemble is available at https://github.com/LorisNanni.

Author Contributions

L.N. conceived of the presented idea. L.N., E.D.L. and M.L.F. performed the experiments and E.D.L., M.L.F. and G.M. wrote the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors thank NVIDIA Corporation for supporting this work by donating a Titan Xp GPU. The authors would like to thank Gustaf Kylberg for sharing the virus dataset and his MATLAB code.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Matuszewski, B.; Shark, L.-K. Hierarchical Iterative Bayesian Approach to Automatic Recognition of Biological Viruses in Electron Microscope Images. In Proceedings of the 2001 International Conference on Image Processing (Cat.No.01CH37205), Thessaloniki, Greece, 7–10 October 2001. [Google Scholar] [CrossRef]
  2. Kylberg, G.; Uppström, M.; Sintorn, I.-M. Virus Texture Analysis Using Local Binary Patterns and Radial Density Profiles. In Proceedings of the 16th Iberoamerican Congress conference on Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Pucón, Chile, 15–18 November 2011; pp. 573–580. [Google Scholar] [CrossRef] [Green Version]
  3. Proença, M.D.C.; Nunes, J.F.; De Matos, A.P.A. Texture Indicators for Segmentation of Polyomavirus Particles in Transmission Electron Microscopy Images. Microsc. Microanal. 2013, 19, 1170–1182. [Google Scholar] [CrossRef]
  4. Dos Santos, F.L.C.; Paci, M.; Nanni, L.; Brahnam, S.; Hyttinen, J. Computer Vision for Virus Image Classification. Biosyst. Eng. 2015, 138, 11–22. [Google Scholar] [CrossRef]
  5. Nanni, L.; Paci, M.; Brahnam, S.; Ghidoni, S.; Menegatti, E. Virus image classification using different texture descriptors. In Proceedings of the 14th International Conference on Bioinformatics and Computational Biology (BIOCOMP’13), Las Vegas, NV, USA, 22–25 July 2013. [Google Scholar]
  6. Wen, Z.; Li, Z.; Peng, Y.; Ying, S. Virus Image Classification Using Multi-Scale Completed Local Binary Pattern Features Extracted from Filtered Images by Multi-Scale Principal Component Analysis. Pattern Recognit. Lett. 2016, 79, 25–30. [Google Scholar] [CrossRef]
  7. De Geus, A.R.; Backes, A.; Souza, J. Variability Evaluation of CNNs Using Cross-Validation on Viruses Images. In Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Valletta, Malta, 27–29 February 2020. [Google Scholar] [CrossRef]
  8. Backes, A.R.; Junior, J.J.D.M.S. Virus Classification by Using a Fusion of Texture Analysis Methods. In Proceedings of the 2020 International Conference on Systems, Signals and Image Processing (IWSSIP), Niteroi, Brazil, 3–5 June 2020. [Google Scholar] [CrossRef]
  9. Hearst, M.A.; Dumais, S.; Osuna, E.; Platt, J.; Scholkopf, B. Support vector machines. IEEE Intell. Syst. 1998, 13, 18–28. [Google Scholar] [CrossRef] [Green Version]
  10. Kobayashi, T. Discriminative Local Binary Pattern for Image Feature Extraction. In Computer Vision—ECCV 2020; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2015; Volume 9256, pp. 594–605. [Google Scholar]
  11. Ryu, J.; Hong, S.; Yang, H.S. Sorted Consecutive Local Binary Pattern for Texture Classification. IEEE Trans. Image Process. 2015, 24, 2254–2265. [Google Scholar] [CrossRef] [PubMed]
  12. El Merabet, Y.; Ruichek, Y.; El Idrissi, A. Attractive-and-repulsive center-symmetric local binary patterns for texture classification. Eng. Appl. Artif. Intell. 2019, 78, 158–172. [Google Scholar] [CrossRef]
  13. Alpaslan, N.; Hanbay, K. Multi-Resolution Intrinsic Texture Geometry-Based Local Binary Pattern for Texture Classification. IEEE Access 2020, 8, 54415–54430. [Google Scholar] [CrossRef]
  14. Kaplan, K.; Kaya, Y.; Kuncan, M.; Ertunc, H.M. Brain tumor classification using modified local binary patterns (LBP) feature extraction methods. Med. Hypotheses 2020, 139, 109696. [Google Scholar] [CrossRef] [PubMed]
  15. Biagio, M.S. Heterogeneous auto-similarities of characteristics (hasc): Exploiting relational information for classification. In Proceedings of the IEEE Computer Vision (ICCV13), Sydney, Australia, 1–8 December 2013; pp. 809–816. [Google Scholar]
  16. El Merabet, Y.; Ruichek, Y. Local Concave-and-Convex Micro-Structure Patterns for texture classification. Pattern Recognit. 2018, 76, 303–322. [Google Scholar] [CrossRef]
  17. Alpaslan, N.; Hanbay, K. Multi-Scale Shape Index-Based Local Binary Patterns for Texture Classification. IEEE Signal Process. Lett. 2020, 27, 660–664. [Google Scholar] [CrossRef]
  18. Roy, S.K.; Ghosh, D.K.; Dubey, S.R.; Bhattacharyya, S.; Chaudhuri, B.B. Unconstrained texture classification using efficient jet texton learning. Appl. Soft Comput. 2020, 86, 105910. [Google Scholar] [CrossRef]
  19. Zhu, Z.; You, X.; Chen, C.P.; Tao, D.; Ou, W.; Jiang, X.; Zou, J. An adaptive hybrid pattern for noise-robust texture analysis. Pattern Recognit. 2015, 48, 2592–2608. [Google Scholar] [CrossRef]
  20. Tan, X.; Triggs, B. Enhanced Local Texture Feature Sets for Face Recognition under Difficult Lighting Conditions. IEEE Trans. Image Process. 2010, 19, 1635–1650. [Google Scholar] [PubMed] [Green Version]
  21. Ojansivu, V.; Heikkilä, J. Blur Insensitive Texture Classification Using Local Phase Quantization. In Image and Signal Processing. ICISP 2008. Lecture Notes in Computer Science; Elmoataz, A., Lezoray, O., Nouboud, F., Mammass, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; Volume 5099. [Google Scholar] [CrossRef] [Green Version]
  22. Nosaka, R.; Ohkawa, Y.; Fukui, K. Feature Extraction Based on Co-occurrence of Adjacent Local Binary Patterns. In Proceedings of the 5th Pacific Rim Conference on Advances in Image and Video Technology—Volume Part II, Gwangju, Korea, 20–23 November 2011; pp. 82–91. [Google Scholar]
  23. Ahonen, T.; Matas, J.; Chu, H.; Pietikäinen, M. Rotation Invariant Image Description with Local Binary Pattern Histogram Fourier Features. In Proceedings of the 16th Scandinavian Conference, SCIA 2009, Oslo, Norway, 15–18 June 2009; pp. 61–70. [Google Scholar]
  24. Ylioinas, J.; Hadid, A.; Guo, Y.; Pietikäinen, M. Efficient Image Appearance Description Using Dense Sampling Based Local Binary Patterns. In Computer Vision—ACCV 2012 Lecture Notes in Computer Science; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2013; Volume 7726, pp. 375–388. [Google Scholar]
  25. Paci, M.; Nanni, L.; Lahti, A.; Aalto-Setälä, K.; Hyttinen, J.; Severi, S. Non-Binary Coding for Texture Descriptors in Sub-Cellular and Stem Cell Image Classification. Curr. Bioinform. 2013, 8, 208–219. [Google Scholar] [CrossRef]
  26. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Adv. Neural Inf. Process. Syst. 2012, 1–9. [Google Scholar] [CrossRef]
  27. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar] [CrossRef] [Green Version]
  28. Wen, Z.-J.; Liu, Z.-H.; Zong, Y.-C.; Li, B. Latent Local Feature Extraction for Low-Resolution Virus Image Classification. J. Oper. Res. Soc. China 2018, 8, 117–132. [Google Scholar] [CrossRef]
Figure 1. El Merabet et al. [12] representation of the Attractive Repulsive Center Symmetric Local Binary Pattern: (a) example of possible center-symmetric triplets; (b) example of the patterns of various methods.
Figure 2. Example of magnitude with Gaussian standard deviation σ = 1 (Lassa virus).
Figure 3. Kaplan et al. [14] proposed method: (a) angle-based neighbors; (b) example of application on a TEM image of the influenza virus with various angles.
Table 1. Accuracy of the texture descriptors.

Descriptor            Accuracy (%)
JET                   58.93
scLBP                 69.47
AHP                   76.60
HASC                  68.40
Gradient + ARCSLBP    61.00
ARCSLBP               79.93
AlphaLBP              64.13
SigmaARCSLBP          75.40
DLBP                  70.33
LCvMSP                64.67
Table 2. Performance of the ensemble.

Ensemble          Accuracy (%)
NewSet            85.40
OLD               85.67
HandC             86.13
DeepL             86.40
HandC + DeepL     89.47
Table 3. Comparison with the literature on the object-scale dataset.

Method                                          Accuracy (%)
Here, 2020                                      89.47
DenseNet [7], 2020                              89.00
PCA [28], 2018                                  88.00
Fusion [8], 2020                                87.27
LBPF [2], 2011 (fixed scale)                    79.00
RDPF [2], 2011                                  78.00
RDPF + LBPF [2], 2011 (fixed + object scale)    87.00
MPMC [6], 2016                                  86.20
NewF [4], 2015                                  85.70
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
