Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity
Figure 1. SEM images of nanofibrous materials. (a,b) samples without anomalies; (c,d) samples containing fine- and coarse-grained anomalies.
Figure 2. (a) the input image I; (b) the binary mask of anomalies Ω_I (white pixels represent anomalies); (c) the estimated mask of anomalies Ω̃_I (white pixels represent anomalies); (d) the difference between Ω_I and Ω̃_I overlaid on the test image. Green pixels represent true positives, red pixels false positives, blue pixels false negatives, and uncolored pixels true negatives.
Figure 3. Simulated example of the map Θ_I and the corresponding binary mask of anomalies Ω̃_I. Here, w_t = h_t = 5, the stride is s = 3, and the variable d represents the average visual similarity between the local patch and the most similar subregions of the dictionary W.
Figure 4. Examples of dictionaries obtained with different patch sizes and different numbers of subregions.
Figure 5. Examples of images corresponding to the features from the dictionary. Here, we show dictionaries built with different patch sizes (16, 32, 64, 128) and numbers of clusters (10, 100, 500, 1000).
Figure 6. Dimension of the feature vectors after Principal Component Analysis (PCA) reduction.
Figure 7. Area Under the Curve (AUC) achieved with different variants of the proposed method. (a) average and standard deviation of the AUC achieved with different patch sizes, regardless of the Convolutional Neural Network (CNN) layer adopted for feature extraction, of the use of PCA to reduce the size of the feature vector, and of the number of words in the dictionary; (b) average AUC achieved with different patch sizes and numbers of dictionary words, regardless of the CNN layer adopted for feature extraction and of the use of PCA to reduce the size of the feature vector.
Figure 8. Average time to process a test image. (a) time needed when features are extracted from the conv5_x layer of the CNN; (b) time needed when features are extracted from the avgpool layer of the CNN.
Figure 9. Results from the two variants of the proposed method, one with conv5_x and the other with avgpool, and comparison with the method proposed by Carrera et al. [25]. Both variants use PCA to reduce the feature vector, a patch size of 32 pixels, and a dictionary of size 10. (a) Receiver Operating Characteristic (ROC) curves; for each ROC curve, the corresponding AUC value is reported in the legend; (b) box plots of the defect coverage obtained at a fixed False Positive Rate (FPR) of 5%.
Figure 10. Close-up of the anomalies found by the proposed method. True positives, false positives, and false negatives are shown in green, red, and blue, respectively. For visualization purposes, the images are slightly cropped and scaled to focus on fine- and coarse-grained anomalies.
Abstract
1. Introduction
2. Related Works
3. Problem Formulation
4. Proposed Method
4.1. Feature Extraction
4.2. Dictionary Building
4.3. Learning to Detect Anomalies
5. Dataset Description
6. Experiments
- patch size: 16, 32, 64, and 128 pixels. The larger the patch, the lower both the computational time and the precision of defect localization;
- dictionary size: 10, 100, 500, and 1000 subregions. The larger the dictionary, the higher the time needed to compare a test patch against its subregions, but the better the performance;
- CNN layer output as feature vector: we use a ResNet-18 pre-trained on the images from the ILSVRC 2015 (ImageNet Large Scale Visual Recognition Challenge) [54]. The input of the network is an RGB image of size 224 × 224. To adapt the network to our problem, we up-sample each SEM image subregion to the network's input size and convert the gray-scale SEM image to RGB by cloning the gray values across the three color channels. We take the output of the conv5_x layer of the network, a 512 × 7 × 7 tensor that is linearized into a vector of size 25,088. Alternatively, we take the output of the average pooling layer (which we name avgpool): the 512-dimensional feature vector is obtained by linearizing its 512 × 1 × 1 output. All feature vectors are L1-normalized;
- Feature dimensionality reduction: the larger the feature vector, the higher the time needed to compare a test patch against the subregions of the dictionary. With PCA, we keep the first principal components such that the retained variance of the data is about 95%. Figure 6 shows the resulting sizes of the feature vectors when PCA is used. The smallest reduced feature vectors are obtained from the avgpool layer, while the largest come from the conv5_x layer.
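To make the pipeline described above concrete, the following is a minimal NumPy sketch of the patch-based scoring: patches are extracted with a given size and stride, L1-normalized, optionally PCA-reduced, and scored by their average distance to the closest dictionary words. This is an illustrative sketch under stated assumptions, not the authors' implementation: raw pixel patches stand in for the CNN features, `dictionary` is assumed to already hold the k-means centroids (the dictionary "words"), and the function names, the Euclidean distance, and the threshold value are all assumptions.

```python
import numpy as np

def extract_patches(img, patch, stride):
    """Slide a patch x patch window with the given stride over a 2-D image;
    return the flattened patches and their top-left coordinates."""
    h, w = img.shape
    feats, coords = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            feats.append(img[y:y + patch, x:x + patch].ravel())
            coords.append((y, x))
    return np.asarray(feats, dtype=float), coords

def l1_normalize(feats, eps=1e-12):
    """L1-normalize each feature vector (row)."""
    return feats / (np.abs(feats).sum(axis=1, keepdims=True) + eps)

def pca_reduce(feats, var_keep=0.95):
    """Project onto the leading principal components that retain
    about 95% of the variance, as done before dictionary matching."""
    centered = feats - feats.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    ratio = (s ** 2) / (s ** 2).sum()
    k = int(np.searchsorted(np.cumsum(ratio), var_keep)) + 1
    return centered @ vt[:k].T

def anomaly_mask(img, dictionary, patch=5, stride=3, k=3, threshold=0.5):
    """Score each patch by its average Euclidean distance (a stand-in for
    the paper's visual-similarity score d) to its k closest dictionary
    words; patches scoring above `threshold` are marked as anomalous."""
    feats, coords = extract_patches(img, patch, stride)
    feats = l1_normalize(feats)
    dists = np.linalg.norm(feats[:, None, :] - dictionary[None, :, :], axis=2)
    d = np.sort(dists, axis=1)[:, :k].mean(axis=1)  # one score per patch
    mask = np.zeros_like(img, dtype=bool)
    for score, (y, x) in zip(d, coords):
        if score > threshold:
            mask[y:y + patch, x:x + patch] = True
    return d, mask
```

With `patch = 5` and `stride = 3`, this mirrors the simulated anomaly-map example of Figure 3: every patch gets a score d, and the estimated binary mask of anomalies collects the patches whose score exceeds the threshold.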
6.1. Performance Metrics
6.2. Results
7. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
References
- Khaitan, S.K.; McCalley, J.D. Design techniques and applications of cyberphysical systems: A survey. IEEE Syst. J. 2015, 9, 350–365. [Google Scholar] [CrossRef]
- Wollschlaeger, M.; Sauter, T.; Jasperneite, J. The future of industrial communication: Automation networks in the era of the internet of things and industry 4.0. IEEE Ind. Electron. Mag. 2017, 11, 17–27. [Google Scholar] [CrossRef]
- Botta, A.; De Donato, W.; Persico, V.; Pescapé, A. Integration of cloud computing and internet of things: A survey. Future Gener. Comput. Syst. 2016, 56, 684–700. [Google Scholar] [CrossRef]
- Banavar, G.S. Cognitive computing: From breakthroughs in the lab to applications on the field. In Proceedings of the 2016 IEEE International Conference on Big Data (Big Data), Washington, DC, USA, 5–8 December 2016. [Google Scholar]
- Gilchrist, A. Introducing Industry 4.0. In Industry 4.0; Springer: Berlin, Germany, 2016; pp. 195–215. [Google Scholar]
- Lasi, H.; Fettke, P.; Kemper, H.G.; Feld, T.; Hoffmann, M. Industry 4.0. Bus. Inf. Syst. Eng. 2014, 6, 239–242. [Google Scholar] [CrossRef]
- Kumar, A. Computer-vision-based fabric defect detection: A survey. IEEE Trans. Ind. Electron. 2008, 55, 348–363. [Google Scholar] [CrossRef]
- Kumar, A.; Pang, G.K. Defect detection in textured materials using Gabor filters. IEEE Trans. Ind. Appl. 2002, 38, 425–440. [Google Scholar] [CrossRef]
- Chan, C.; Pang, G.K. Fabric defect detection by Fourier analysis. IEEE Trans. Ind. Appl. 2000, 36, 1267–1276. [Google Scholar] [CrossRef] [Green Version]
- Wheeler, D.A.; Brykczynski, B.; Meeson, R.N., Jr. Software Inspection: An Industry Best Practice for Defect Detection and Removal; IEEE Computer Society Press: Washington, DC, USA, 1996. [Google Scholar]
- Ramakrishna, S.; Fujihara, K.; Teo, W.E.; Yong, T.; Ma, Z.; Ramaseshan, R. Electrospun nanofibers: Solving global issues. Mater. Today 2006, 9, 40–50. [Google Scholar] [CrossRef]
- Burger, C.; Hsiao, B.S.; Chu, B. Nanofibrous materials and their applications. Annu. Rev. Mater. Res. 2006, 36, 333–368. [Google Scholar] [CrossRef]
- Ding, B.; Wang, M.; Yu, J.; Sun, G. Gas Sensors Based on Electrospun Nanofibers. Sensors 2009, 9, 1609–1624. [Google Scholar] [CrossRef] [PubMed]
- Liang, X.; Kim, T.H.; Yoon, J.W.; Kwak, C.H.; Lee, J.H. Ultrasensitive and ultraselective detection of H2S using electrospun CuO-loaded In2O3 nanofiber sensors assisted by pulse heating. Sens. Actuators B Chem. 2015, 209, 934–942. [Google Scholar] [CrossRef]
- Vasita, R.; Katti, D.S. Nanofibers and their applications in tissue engineering. Int. J. Nanomed. 2006, 1, 15. [Google Scholar] [CrossRef]
- Venugopal, J.; Ramakrishna, S. Applications of polymer nanofibers in biomedicine and biotechnology. Appl. Biochem. Biotechnol. 2005, 125, 147–157. [Google Scholar] [CrossRef]
- Bjorge, D.; Daels, N.; De Vrieze, S.; Dejans, P.; Van Camp, T.; Audenaert, W.; Hogie, J.; Westbroek, P.; De Clerck, K.; Van Hulle, S.W. Performance assessment of electrospun nanofibers for filter applications. Desalination 2009, 249, 942–948. [Google Scholar] [CrossRef]
- Huang, Z.M.; Zhang, Y.Z.; Kotaki, M.; Ramakrishna, S. A review on polymer nanofibers by electrospinning and their applications in nanocomposites. Compos. Sci. Technol. 2003, 63, 2223–2253. [Google Scholar] [CrossRef]
- Hajiali, H.; Heredia-Guerrero, J.A.; Liakos, I.; Athanassiou, A.; Mele, E. Alginate nanofibrous mats with adjustable degradation rate for regenerative medicine. Biomacromolecules 2015, 16, 936–943. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Guzman-Puyol, S.; Heredia-Guerrero, J.A.; Ceseracciu, L.; Hajiali, H.; Canale, C.; Scarpellini, A.; Cingolani, R.; Bayer, I.S.; Athanassiou, A.; Mele, E. Low-cost and effective fabrication of biocompatible nanofibers from silk and cellulose-rich materials. ACS Biomater. Sci. Eng. 2016, 2, 526–534. [Google Scholar] [CrossRef]
- Contardi, M.; Heredia-Guerrero, J.A.; Perotto, G.; Valentini, P.; Pompa, P.P.; Spanò, R.; Goldoni, L.; Bertorelli, R.; Athanassiou, A.; Bayer, I.S. Transparent ciprofloxacin-povidone antibiotic films and nanofiber mats as potential skin and wound care dressings. Eur. J. Pharm. Sci. 2017, 104, 133–144. [Google Scholar] [CrossRef] [PubMed]
- Romano, I.; Summa, M.; Heredia-Guerrero, J.A.; Spanò, R.; Ceseracciu, L.; Pignatelli, C.; Bertorelli, R.; Mele, E.; Athanassiou, A. Fumarate-loaded electrospun nanofibers with anti-inflammatory activity for fast recovery of mild skin burns. Biomed. Mater. 2016, 11, 041001. [Google Scholar] [CrossRef] [PubMed]
- Wei, K.; Kim, H.R.; Kim, B.S.; Kim, I.S. Electrospun metallic nanofibers fabricated by electrospinning and metallization. In Nanofibers-Production, Properties and Functional Applications; InTech: London, UK, 2011. [Google Scholar]
- Tucker, N.; Stanger, J.; Staiger, M.; Razzaq, H.; Hofman, K. The history of the science and technology of electrospinning from 1600 to 1995. J. Eng. Fibers Fabr. 2012, 7, 63–73. [Google Scholar]
- Carrera, D.; Manganini, F.; Boracchi, G.; Lanzarone, E. Defect Detection in SEM Images of Nanofibrous Materials. IEEE Trans. Ind. Inform. 2017, 13, 551–561. [Google Scholar] [CrossRef]
- Carrera, D.; Manganini, F.; Boracchi, G.; Lanzarone, E. Defect Detection in Nanostructures. In CNR IMATI REPORT Series; IMATI CNR: Pavia, Italy, 2016. [Google Scholar]
- Shi, C.; Luu, D.K.; Yang, Q.; Liu, J.; Chen, J.; Ru, C.; Xie, S.; Luo, J.; Ge, J.; Sun, Y. Recent advances in nanorobotic manipulation inside scanning electron microscopes. Microsyst. Nanoeng. 2016, 2, 16024. [Google Scholar] [CrossRef]
- Yun, K.M.; Hogan, C.J.; Matsubayashi, Y.; Kawabe, M.; Iskandar, F.; Okuyama, K. Nanoparticle filtration by electrospun polymer fibers. Chem. Eng. Sci. 2007, 62, 4751–4759. [Google Scholar] [CrossRef]
- Chandola, V.; Banerjee, A.; Kumar, V. Anomaly detection: A survey. ACM Comput. Surv. 2009, 41, 15. [Google Scholar] [CrossRef]
- Navarro, P.J.; Fernández-Isla, C.; Alcover, P.M.; Suardíaz, J. Defect Detection in Textures through the Use of Entropy as a Means for Automatically Selecting the Wavelet Decomposition Level. Sensors 2016, 16, 1178. [Google Scholar] [CrossRef] [PubMed]
- Micucci, D.; Mobilio, M.; Napoletano, P.; Tisato, F. Falls as anomalies? An experimental evaluation using smartphone accelerometer data. J. Ambient Intell. Humaniz. Comput. 2017, 8, 87–99. [Google Scholar] [CrossRef]
- Berry, M.W.; Castellanos, M. Survey of Text Mining II; Springer: Berlin, Germany, 2008; Volume 6. [Google Scholar]
- Rajasegarar, S.; Leckie, C.; Palaniswami, M.; Bezdek, J.C. Distributed anomaly detection in wireless sensor networks. In Proceedings of the 10th IEEE Singapore International Conference on Communication Systems (ICCS 2006), Singapore, 30 October–1 November 2006. [Google Scholar]
- Pimentel, M.A.; Clifton, D.A.; Clifton, L.; Tarassenko, L. A review of novelty detection. Signal Process. 2014, 99, 215–249. [Google Scholar] [CrossRef]
- Boracchi, G.; Carrera, D.; Wohlberg, B. Novelty detection in images by sparse representations. In Proceedings of the 2014 IEEE Symposium on Intelligent Embedded Systems (IES), Orlando, FL, USA, 9–12 December 2014; pp. 47–54. [Google Scholar]
- Zujovic, J.; Pappas, T.N.; Neuhoff, D.L. Structural texture similarity metrics for image analysis and retrieval. IEEE Trans. Image Process. 2013, 22, 2545–2558. [Google Scholar] [CrossRef] [PubMed]
- Adler, A.; Elad, M.; Hel-Or, Y.; Rivlin, E. Sparse coding with anomaly detection. J. Signal Process. Syst. 2015, 79, 179–188. [Google Scholar] [CrossRef]
- Cusano, C.; Napoletano, P.; Schettini, R. Intensity and color descriptors for texture classification. In Image Processing: Machine Vision Applications VI; SPIE: Bellingham, WA, USA, 2013; Volume 8661, p. 866113. [Google Scholar]
- Napoletano, P. Hand-Crafted vs Learned Descriptors for Color Texture Classification. In Proceedings of the International Workshop on Computational Color Imaging, Milan, Italy, 29–31 March 2017; pp. 259–271. [Google Scholar]
- Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [PubMed]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, Proceedings of the 2012 Annual Conference on Neural Information Processing Systems (NIPS), Stateline, NV, USA, 3–8 December 2012; MIT Press: Cambridge, MA, USA; pp. 1097–1105.
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
- Razavian, A.S.; Azizpour, H.; Sullivan, J.; Carlsson, S. CNN features off-the-shelf: An astounding baseline for recognition. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Columbus, OH, USA, 24–27 June 2014; pp. 512–519. [Google Scholar]
- Vedaldi, A.; Lenc, K. MatConvNet—Convolutional Neural Networks for MATLAB. arXiv, 2014; arXiv:1412.4564. [Google Scholar]
- Napoletano, P. Visual descriptors for content-based retrieval of remote-sensing images. Int. J. Remote Sens. 2018, 39, 1–34. [Google Scholar] [CrossRef]
- Bianco, S.; Celona, L.; Napoletano, P.; Schettini, R. On the Use of Deep Learning for Blind Image Quality Assessment. arXiv, 2017; arXiv:1602.05531. [Google Scholar]
- Cusano, C.; Napoletano, P.; Schettini, R. Combining multiple features for color texture classification. J. Electron. Imaging 2016, 25, 061410. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv, 2014; arXiv:1409.1556. [Google Scholar]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Stanford, CA, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Cusano, C.; Napoletano, P.; Schettini, R. Evaluating color texture descriptors under large variations of controlled lighting conditions. J. Opt. Soc. Am. A 2016, 33, 17–30. [Google Scholar] [CrossRef] [PubMed]
- Wold, S.; Esbensen, K.; Geladi, P. Principal component analysis. Chemom. Intell. Lab. Syst. 1987, 2, 37–52. [Google Scholar] [CrossRef]
- NanoTWICE: NANOcomposite NANOfibres for Treatment of Air and Water by an Industrial Conception of Electrospinning. Available online: http://www.mi.imati.cnr.it/ettore/NanoTWICE/ (accessed on 12 January 2018).
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
- Arthur, D.; Vassilvitskii, S. k-means++: The advantages of careful seeding. In Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 7–9 January 2007; pp. 1027–1035. [Google Scholar]
- PyTorch. Available online: http://pytorch.org/ (accessed on 12 January 2018).
Layer Name | Output Size | ResNet-18
---|---|---
conv1 | 112 × 112 | 7 × 7, 64, stride 2
conv2_x | 56 × 56 | 3 × 3 max pool, stride 2; [3 × 3, 64; 3 × 3, 64] × 2
conv3_x | 28 × 28 | [3 × 3, 128; 3 × 3, 128] × 2
conv4_x | 14 × 14 | [3 × 3, 256; 3 × 3, 256] × 2
conv5_x | 7 × 7 | [3 × 3, 512; 3 × 3, 512] × 2
average pool | 1 × 1 | 7 × 7 average pool
fully connected | 1000 | 512 × 1000 fully connected
softmax | 1000 |
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Napoletano, P.; Piccoli, F.; Schettini, R. Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity. Sensors 2018, 18, 209. https://doi.org/10.3390/s18010209