Abstract
With the increasing use of social networks and mobile devices, the number of videos posted on the Internet is growing exponentially. Among the inappropriate content published on the Internet, pornography is one of the most worrying, as it can be accessed by teens and children. Two spatiotemporal CNNs, VGG-C3D CNN and ResNet R(\(2+1\))D CNN, were assessed for pornography detection in videos in the present study. Experimental results on the Pornography-800 dataset showed that these spatiotemporal CNNs performed better than some state-of-the-art methods based on bag of visual words and are competitive with other CNN-based approaches, reaching an accuracy of \(95.1\%\).
1 Introduction
With the increasing use of social networks and mobile devices, the number of videos posted on the Internet is growing exponentially. Thus, the recognition of videos with unwanted content (e.g. pornography, violence, scenes with blood) becomes essential. Pornography is probably the type of unwanted content that causes the most problems, because it is inappropriate for some ages, inappropriate for some environments (e.g. public places, schools, the workplace), and unwanted by people who do not wish to be exposed to this material. Another important aspect is that some of this content may be prohibited by law from being produced and disseminated, as in the case of child pornography, which is considered a crime in several countries.
Video understanding is a challenging task in the fields of Computer Vision and Pattern Recognition and has been studied for many decades. Much of the research in these areas is focused on creating spatiotemporal descriptors for video understanding. The most relevant studies dealing with hand-crafted feature extraction from videos include those based on spatiotemporal interest points: STIPs (HARRIS3D) [12], SIFT-3D [17], HOG3D [10], MBH [6] and Cuboids [7]. These efforts build on 2D image descriptors and use different encoding schemes based on pyramids and histograms. In addition, another very important state-of-the-art method is the improved Dense Trajectories (iDT) [24], which presents good performance in tasks related to video understanding.
The great success of deep learning methods in still-image recognition tasks, driven by the AlexNet network [11], increased the interest in applying deep learning techniques to videos. Some approaches apply CNNs trained on still images to extract features from individual video frames and then fuse these features into a fixed-size descriptor using pooling and high-dimensional encoding. Another alternative is to use 3D spatiotemporal CNNs. Ji et al. [8], for instance, proposed a 3D CNN with spatiotemporal convolutions to recognize human actions in videos. Simonyan and Zisserman [18] introduced an influential two-stream CNN framework, in which deep motion features extracted from the optical flow are fused with traditional CNN activations computed from the RGB input. In [16], the authors use this two-stream CNN framework to detect pornography in videos.
Two spatiotemporal CNNs proposed in the literature, VGG-C3D CNN [21] and ResNet R(\(2+1\))D CNN [22], were used in the present study for pornography detection in videos. To the best of our knowledge, this is the first study to use 3D CNNs to detect pornography in videos.
2 VGG C3D CNN
In [21] the authors found that a homogeneous setting with \(3\times 3\times 3\) convolution kernels is the best option for 3D CNNs. This is similar to the 2D CNNs proposed in [19], also known as VGG. Given a large enough dataset, a 3D CNN with \(3\times 3\times 3\) kernels can be trained as deep as the memory available in current GPUs allows. The authors designed the 3D CNN to have 8 convolution layers and 5 pooling layers, followed by two fully connected layers and a softmax output layer. Figure 1 shows the 3D spatiotemporal CNN proposed in [21] (called VGG-C3D in this paper). The 3D convolution filters of VGG-C3D have dimension \(3\times 3\times 3\) with stride \(1\times 1\times 1\). In turn, the 3D pooling layers are \(2\times 2\times 2\) with stride \(2\times 2\times 2\), except for pool1, which has kernel size \(1\times 2\times 2\) and stride \(1\times 2\times 2\) in order to preserve the temporal information in the early phase. Each fully connected layer has 4,096 output units.
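For illustration, the sketch below expresses this layer configuration in PyTorch-style code. It is a minimal sketch, not the authors' Caffe model: the filter counts per layer (64, 128, 256, 512) follow the original C3D description in [21] and should be read as assumptions here, since they are not listed in the text above.

```python
import torch
import torch.nn as nn

class C3DSketch(nn.Module):
    """Illustrative VGG-C3D: 8 conv layers, 5 pooling layers, 2 fully connected layers."""
    def __init__(self, num_classes=487):  # 487 = Sports-1M classes
        super().__init__()
        def conv(c_in, c_out):
            # homogeneous 3x3x3 kernels, stride 1x1x1, padding to keep spatial/temporal size
            return nn.Sequential(nn.Conv3d(c_in, c_out, 3, stride=1, padding=1),
                                 nn.ReLU(inplace=True))
        self.features = nn.Sequential(
            conv(3, 64),
            nn.MaxPool3d((1, 2, 2), stride=(1, 2, 2)),   # pool1 keeps the temporal length
            conv(64, 128),
            nn.MaxPool3d(2, stride=2),                   # pool2..pool5 are 2x2x2, stride 2x2x2
            conv(128, 256), conv(256, 256),
            nn.MaxPool3d(2, stride=2),
            conv(256, 512), conv(512, 512),
            nn.MaxPool3d(2, stride=2),
            conv(512, 512), conv(512, 512),
            nn.MaxPool3d(2, stride=2, padding=(0, 1, 1)),
        )
        self.fc6 = nn.Linear(512 * 1 * 4 * 4, 4096)      # 4,096 output units
        self.fc7 = nn.Linear(4096, 4096)
        self.fc8 = nn.Linear(4096, num_classes)

    def forward(self, x):             # x: (batch, 3, 16, 112, 112)
        x = self.features(x).flatten(1)
        x = torch.relu(self.fc6(x))   # fc6 activations are used later as clip features
        x = torch.relu(self.fc7(x))
        return self.fc8(x)            # softmax is applied in the loss / at inference
```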
The model provided by the authors of [21], trained on the training split of the Sports-1M dataset, was used in the present study. Sports-1M was created by Google Research and the Stanford Computer Science Department and contains 1,133,158 videos of 487 sports classes. Since Sports-1M has many long videos, five 2-second clips were randomly extracted from every training video. The clips were then resized to a frame size of \(128\times 171\). During the training phase, the clips were randomly cropped into \(16\times 112\times 112\) crops for spatial and temporal jittering, and horizontally flipped with 50% probability. Training was done by Stochastic Gradient Descent (SGD) with a minibatch size of 30 examples. The initial learning rate was 0.003 and was divided by 2 every 150K iterations. The optimization was stopped at 1.9M iterations (about 13 epochs).
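One plausible way to realize this spatial and temporal jittering is sketched below. The helper is hypothetical and only illustrates the preprocessing described above; it is not the pipeline used to train the released model.

```python
import random
import numpy as np

def jitter_clip(frames):
    """Hypothetical jittering of one training clip.

    frames: array of shape (num_frames, 128, 171, 3), a 2-second clip
    already resized to 128x171. Returns a 16 x 112 x 112 crop.
    """
    # temporal jittering: pick 16 consecutive frames at a random offset
    t = random.randint(0, len(frames) - 16)
    clip = frames[t:t + 16]

    # spatial jittering: random 112x112 crop
    y = random.randint(0, clip.shape[1] - 112)
    x = random.randint(0, clip.shape[2] - 112)
    clip = clip[:, y:y + 112, x:x + 112]

    # horizontal flip with 50% probability
    if random.random() < 0.5:
        clip = clip[:, :, ::-1]
    return np.ascontiguousarray(clip)
```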
After training, VGG-C3D may be used as a feature extractor. In order to extract features, a video is split into clips of 16 frames each. For the present study, an 8-frame overlap between two consecutive clips was used. The clips were then fed to VGG-C3D to extract the fc6 activations. Since each video may have an arbitrary number of clips, the fc6 activations were averaged to form a single 4,096-dimensional descriptor per video, followed by L2 normalization.
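This clip-to-video aggregation can be summarized as in the following sketch, where extract_fc6 stands for an assumed helper that runs a forward pass of VGG-C3D up to fc6.

```python
import numpy as np

def video_descriptor(frames, extract_fc6, clip_len=16, overlap=8):
    """Build one 4,096-d descriptor per video from fc6 clip features."""
    stride = clip_len - overlap                    # 8-frame step between consecutive clips
    feats = []
    for start in range(0, len(frames) - clip_len + 1, stride):
        clip = frames[start:start + clip_len]      # 16-frame clip
        feats.append(extract_fc6(clip))            # 4,096-d fc6 activation
    desc = np.mean(feats, axis=0)                  # average pooling over clips
    return desc / np.linalg.norm(desc)             # L2 normalization
```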
To evaluate the VGG-C3D features extracted from the Pornography-800 dataset [2], fc6 features were extracted from all clips and then projected onto a 2D space using t-SNE [13] (Fig. 2a) and PCA (Fig. 2b). It is worth noting that no fine-tuning was conducted, in order to verify whether the model generalizes well across datasets. Figure 2 illustrates that the VGG-C3D features are semantically separable, although samples of pornography and of difficult non-pornography show some overlap.
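Such projections can be produced, for example, with scikit-learn; the snippet below is only an illustrative sketch of the procedure (the variable names X and y are assumptions), not the code used to generate Fig. 2.

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def project_2d(X, y):
    """X: (num_clips, 4096) fc6 features; y: class labels used for coloring."""
    for name, model in [("t-SNE", TSNE(n_components=2)),
                        ("PCA", PCA(n_components=2))]:
        Z = model.fit_transform(X)                 # project features to 2 dimensions
        plt.figure()
        plt.scatter(Z[:, 0], Z[:, 1], c=y, s=4)
        plt.title(f"fc6 features projected with {name}")
    plt.show()
```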
3 ResNet R(\(2+1\))D CNN
Recent studies have indicated that replacing 3D convolutions by two operations, a 2D spatial convolution and a 1D temporal convolution, can improve the efficiency of 3D CNN models. In [22], the authors designed a new spatiotemporal convolutional block, R(\(2+1\))D, that explicitly factorizes the 3D convolution into two separate and successive operations, a 2D spatial convolution and a 1D temporal convolution. With this architecture, a nonlinear rectification such as ReLU can be added between the 2D and the 1D convolutions. This doubles the number of nonlinearities compared to a 3D CNN with the same number of parameters, allowing the model to represent more complex functions. Moreover, the decomposition into two convolutions makes the optimization easier, producing lower training and test loss in practice.
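The factorization can be sketched as follows (an illustrative PyTorch-style block, not the authors' Caffe2 implementation). The number of intermediate channels is chosen, as in [22], so that the (\(2+1\))D block has approximately the same parameter count as the full \(t\times d\times d\) 3D convolution it replaces.

```python
import torch.nn as nn

def r2plus1d_block(in_ch, out_ch, t=3, d=3):
    """Factorize a t x d x d 3D convolution into a 2D spatial convolution
    (1 x d x d) followed by a 1D temporal convolution (t x 1 x 1)."""
    # intermediate channel count that roughly preserves the parameter
    # count of the original 3D convolution (as proposed in [22])
    mid = (t * d * d * in_ch * out_ch) // (d * d * in_ch + t * out_ch)
    return nn.Sequential(
        nn.Conv3d(in_ch, mid, kernel_size=(1, d, d), padding=(0, d // 2, d // 2)),
        nn.BatchNorm3d(mid),
        nn.ReLU(inplace=True),          # extra nonlinearity compared to a plain 3D conv
        nn.Conv3d(mid, out_ch, kernel_size=(t, 1, 1), padding=(t // 2, 0, 0)),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )
```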
Another study [25] showed that replacing 3D convolutions with spatiotemporally separable 3D convolutions makes the model \(1.5\times \) more computationally efficient (in terms of FLOPs).
Experiments performed in [22] demonstrated that ResNets adopting homogeneous (\(2+1\))D blocks in all layers achieve state-of-the-art performance on both the Kinetics and the Sports-1M datasets.
Spatiotemporal decomposition can be applied to any 3D convolutional layer. An illustration of this decomposition is given in Fig. 3 for the simplified setting, where the input tensor contains a single channel.
The architecture proposed in [22] was applied in the present study. This relatively simple structure was based on deep residual networks, which have shown good performance. Table 1 presents the architecture details of R(\(2+1\))D.
Experiments were conducted using a model pre-trained on the Kinetics dataset. Transfer learning was applied to fine-tune the model on the Pornography-800 dataset. The R(\(2+1\))D network used had 34 layers; video frames were resized to \(128\times 171\), and each clip was generated by randomly cropping \(112\times 112\) windows. A total of 32 consecutive frames were randomly sampled from each video, applying temporal jittering during fine-tuning.
Although Pornography-800 has only about 640 training videos in each split, the epoch size was set to 2,560 for temporal jittering, i.e., 4 clips per training video per epoch. This setup was chosen to optimize training time, since the videos have different lengths.
Batch normalization was applied to all convolutional layers, and the mini-batch size was set to 4 clips due to GPU memory limitations. The initial learning rate was set to 0.0001 and divided by 10 every 2 epochs, and fine-tuning was conducted for 8 epochs. In the classification phase, the videos were split into 32-frame clips with a 16-frame overlap between two consecutive clips, and the ResNet R(\(2+1\))D CNN was applied to each clip to extract features and perform softmax classification. Since each video can have an arbitrary number of clips, average pooling of the softmax probabilities over clips was performed to obtain the video-level prediction.
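A minimal sketch of this video-level aggregation, assuming a trained clip-level model that returns class logits, is given below (illustrative PyTorch code, not the Caffe2 implementation used in the experiments).

```python
import torch

@torch.no_grad()
def video_prediction(frames, model, clip_len=32, overlap=16):
    """frames: tensor of shape (num_frames, 3, 112, 112).
    Returns video-level class probabilities by averaging clip predictions."""
    stride = clip_len - overlap                        # 16-frame step between clips
    probs = []
    for start in range(0, len(frames) - clip_len + 1, stride):
        clip = frames[start:start + clip_len]          # (32, 3, 112, 112)
        clip = clip.permute(1, 0, 2, 3).unsqueeze(0)   # (1, 3, 32, 112, 112)
        probs.append(torch.softmax(model(clip), dim=1))
    return torch.cat(probs).mean(dim=0)                # average pooling over clips
```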
4 Experiments and Results
The Pornography-800 dataset [2] was chosen to evaluate the 3D CNNs used in the present study. This dataset contains 800 videos, representing a total of 80 h, which encompass 400 pornography videos and 400 non-pornography videos. Figure 4 shows some selected frames from a small sample of this dataset, illustrating the diversity and challenges posed.
The VGG-C3D network evaluated in the present study was developed using Caffe [9], and the ResNet R(\(2+1\))D architecture was developed using Caffe2. All experiments were run on a computer with an Intel Xeon E5-2630 v3 2.40 GHz processor, 32 GB of RAM and an NVIDIA Titan XP GPU with 12 GB of memory. The results presented are the mean values obtained over the 5 splits of the Pornography-800 dataset using the 5-fold cross-validation protocol proposed in [2] (640 videos in the training set and 160 in the test set in each fold).
Table 2 shows the accuracy of both approaches: VGG-C3D with a linear SVM classifier and ResNet R(\(2+1\))D CNN with a softmax classifier. The VGG-C3D architecture with a linear SVM classifier achieved the better performance, with an accuracy of 95.1%, while the ResNet R(\(2+1\))D architecture with the softmax classifier achieved an accuracy of 91.8%.
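For reference, the evaluation of the VGG-C3D descriptors with a linear SVM over the 5 folds can be sketched as below (scikit-learn code under assumed variable names; the SVM hyperparameters are assumptions, since they are not reported above).

```python
import numpy as np
from sklearn.svm import LinearSVC

def mean_accuracy(folds):
    """folds: 5 tuples (X_train, y_train, X_test, y_test) of L2-normalized
    4,096-d VGG-C3D video descriptors and binary (porn / non-porn) labels."""
    accs = []
    for X_tr, y_tr, X_te, y_te in folds:
        clf = LinearSVC(C=1.0)          # C=1.0 is an assumed value
        clf.fit(X_tr, y_tr)
        accs.append(clf.score(X_te, y_te))
    return float(np.mean(accs))         # mean accuracy over the 5 folds
```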
Table 3 presents some results reported in the literature obtained by applying other methods to the Pornography-800 dataset. As observed in Tables 2 and 3, the CNN-based methods outperform all methods based on bag of visual words.
5 Conclusion and Future Work
The experimental results obtained in the present study on the Pornography-800 dataset showed that the spatiotemporal CNNs adopted (VGG-C3D and ResNet R(\(2+1\))D) performed better than all of the compared methods based on bag of visual words. Moreover, these spatiotemporal CNNs were competitive with the other CNN-based approaches considered, reaching an accuracy of \(95.1\%\).
With the recent creation and availability of large video datasets, along with the evolution of GPUs, we believe that 3D CNNs will be able to reach state-of-the-art performance in video understanding tasks, similar to what happened with the launch of AlexNet. An indication of this is that a 3D CNN (I3D) [5] has recently achieved the best result on the Kinetics dataset.
Future research is expected to: apply the VGG-C3D and ResNet R(\(2+1\))D CNNs to larger datasets; fuse the VGG-C3D features with the iDT; evaluate the behavior of the 3D CNNs using optical flow and its fusion with RGB; and use the ResNet R(\(2+1\))D CNN as a feature extractor.
References
Avila, S., Thome, N., Cord, M., Valle, E., Araújo, A.D.A.: Bossa: extended bow formalism for image classification. In: 18th IEEE ICIP, pp. 2909–2912 (2011)
Avila, S., Thome, N., Cord, M., Valle, E., Araújo, A.D.A.: Pooling in image representation: the visual codeword point of view. Comput. Vis. Image Underst. 117(5), 453–465 (2013)
Caetano, C., Avila, S., Guimarães, S., Araújo, A.D.A.: Pornography detection using BossaNova video descriptor. In: 2014 22nd European Signal Processing Conference (EUSIPCO), pp. 1681–1685 (2014)
Caetano, C., Avila, S., Schwartz, W.R., Guimarães, S.J.F., Araújo, A.D.A.: A mid-level video representation based on binary descriptors: a case study for pornography detection. CoRR abs/1605.03804 (2016)
Carreira, J., Zisserman, A.: Quo vadis, action recognition? A new model and the kinetics dataset. CoRR abs/1705.07750 (2017)
Dalal, N., Triggs, B., Schmid, C.: Human detection using oriented histograms of flow and appearance. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3952, pp. 428–441. Springer, Heidelberg (2006). https://doi.org/10.1007/11744047_33
Dollar, P., Rabaud, V., Cottrell, G., Belongie, S.: Behavior recognition via sparse spatio-temporal features. In: 2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, pp. 65–72 (2005)
Ji, S., Xu, W., Yang, M., Yu, K.: 3D convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 221–231 (2013)
Jia, Y., et al.: Caffe: convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093 (2014)
Klaser, A., Marszalek, M., Schmid, C.: A spatio-temporal descriptor based on 3D-gradients. In: Everingham, M., Needham, C., Fraile, R. (eds.) BMVC 2008–19th British Machine Vision Conference, pp. 275:1–10. British Machine Vision Association, Leeds, United Kingdom (2008)
Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, NIPS 2012, pp. 1097–1105. Curran Associates Inc., USA (2012)
Laptev, I., Lindeberg, T.: Space-time interest points. In: Proceedings Ninth IEEE International Conference on Computer Vision, vol. 1, pp. 432–439 (2003)
van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008)
Moreira, D., et al.: Pornography classification: the hidden clues in video spacetime. Forensic Sci. Int. 268, 46–61 (2016)
Moustafa, M.: Applying deep learning to classify pornographic images and videos. CoRR abs/1511.08899 (2015)
Perez, M., et al.: Video pornography detection through deep learning techniques and motion information. Neurocomputing 230, 279–293 (2017)
Scovanner, P., Ali, S., Shah, M.: A 3-dimensional sift descriptor and its application to action recognition. In: Proceedings of the 15th ACM International Conference on Multimedia, MM 2007, pp. 357–360. ACM, New York (2007)
Simonyan, K., Zisserman, A.: Two-stream convolutional networks for action recognition in videos. In: Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems 27, pp. 568–576. Curran Associates, Inc. (2014)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556 (2014)
de Souza, F.D.M., Valle, E., Cámara-Chávez, G., Araújo, A.: An evaluation on color invariant based local spatiotemporal features for action recognition. In: IEEE SIBGRAPI (2012)
Tran, D., Bourdev, L., Fergus, R., Torresani, L., Paluri, M.: Learning spatiotemporal features with 3d convolutional networks. In: IEEE ICCV, pp. 4489–4497. Washington, DC, USA (2015)
Tran, D., Wang, H., Torresani, L., Ray, J., LeCun, Y., Paluri, M.: A closer look at spatiotemporal convolutions for action recognition. CoRR abs/1711.11248 (2017)
Valle, E., de Avila, S., da Luz Jr., A., de Souza, F., Coelho, M., Araújo, A.: Content-based filtering for video sharing social networks. CoRR abs/1101.2427 (2011)
Wang, H., Schmid, C.: Action recognition with improved trajectories. In: 2013 IEEE International Conference on Computer Vision, pp. 3551–3558 (2013)
Xie, S., Sun, C., Huang, J., Tu, Z., Murphy, K.: Rethinking spatiotemporal feature learning for video understanding. CoRR abs/1712.04851 (2017)
Acknowledgments
We thank NVIDIA Corporation for the donation of the GPU used in this study. This study was financed in part by CAPES - Brazil (Finance Code 001).