
Search Results (6)

Search Parameters:
Keywords = shoplifting

15 pages, 6105 KiB  
Article
Shoplifting Detection Using Hybrid Neural Network CNN-BiLSMT and Development of Benchmark Dataset
by Iqra Muneer, Mubbashar Saddique, Zulfiqar Habib and Heba G. Mohamed
Appl. Sci. 2023, 13(14), 8341; https://doi.org/10.3390/app13148341 - 19 Jul 2023
Cited by 2 | Viewed by 6397
Abstract
Shoplifting poses a significant challenge for shop owners as well as other stakeholders, including law enforcement agencies. In recent years, the task of shoplifting detection has gained the interest of researchers because video surveillance generates vast quantities of data that cannot be processed in real time by human staff. In previous studies, different datasets and methods have been developed for shoplifting detection. However, there is a lack of both a large benchmark dataset covering different shoplifting behaviors and standard methods for the task. To overcome this limitation, this study develops a large benchmark dataset of 900 instances, with 450 cases of shoplifting and 450 of non-shoplifting, manually annotated according to five different ways of shoplifting. Moreover, a method for shoplifting detection is proposed to evaluate the developed dataset, which is also evaluated with baseline methods, including a 2D CNN and a 3D CNN. The proposed method, a combination of Inception V3 and BiLSTM, outperforms all baseline methods with 81% accuracy. The developed dataset will be made publicly available to foster research in various areas related to human activity recognition, including systems for detecting behaviors such as robbery, identifying human movements, enhancing safety measures, and detecting instances of theft.
(This article belongs to the Section Computing and Artificial Intelligence)
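The abstract describes a hybrid of a CNN feature extractor (Inception V3) and a bidirectional recurrent classifier. Below is a minimal numpy sketch of that bidirectional read of a clip, assuming per-frame features have already been extracted; for brevity it uses a plain tanh RNN cell in place of LSTM cells, and all dimensions and weights are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_pass(frames, Wx, Wh, b):
    """Run a simple tanh RNN over a (T, d) feature sequence; return final state."""
    h = np.zeros(Wh.shape[0])
    for x in frames:
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h

def bidirectional_classify(frames, params):
    """Read the clip forward and backward in time, concatenate the two final
    states, and apply a logistic head to score P(shoplifting)."""
    Wx, Wh, b, w_out, b_out = params
    h_fwd = rnn_pass(frames, Wx, Wh, b)        # forward in time
    h_bwd = rnn_pass(frames[::-1], Wx, Wh, b)  # backward in time
    h = np.concatenate([h_fwd, h_bwd])         # shape (2 * hidden,)
    logit = w_out @ h + b_out
    return 1.0 / (1.0 + np.exp(-logit))

T, d, hidden = 16, 8, 4                        # toy clip length and feature size
params = (rng.normal(size=(hidden, d)) * 0.1,
          rng.normal(size=(hidden, hidden)) * 0.1,
          np.zeros(hidden),
          rng.normal(size=2 * hidden) * 0.1,
          0.0)
clip_features = rng.normal(size=(T, d))        # stand-in for Inception V3 output
p = bidirectional_classify(clip_features, params)
print(round(float(p), 3))
```

In a real BiLSTM the forward and backward passes would also use separate gated LSTM weights; the sketch only illustrates the data flow of the bidirectional read.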
Figures:
Figure 1. A boy putting the item in an inner pocket.
Figure 2. A boy putting the item in the front pocket of a T-shirt.
Figure 3. A boy putting the item in a fully opened bag.
Figure 4. A second person putting the item into the first person's bag.
Figure 5. A boy putting the item in the opened portion of a closed bag.
Figure 6. A boy putting the item in a trouser pocket.
Figure 7. Proposed architecture.
Figure 8. Architecture of Inception V3.
Figure 9. Architecture of Bi-LSTM.
19 pages, 6553 KiB  
Article
Suspicious Behavior Detection with Temporal Feature Extraction and Time-Series Classification for Shoplifting Crime Prevention
by Amril Nazir, Rohan Mitra, Hana Sulieman and Firuz Kamalov
Sensors 2023, 23(13), 5811; https://doi.org/10.3390/s23135811 - 22 Jun 2023
Cited by 6 | Viewed by 3423
Abstract
The rise in crime rates in many parts of the world, coupled with advancements in computer vision, has increased the need for automated crime detection services. To address this issue, we propose a new approach for detecting suspicious behavior as a means of preventing shoplifting. Existing methods are based on convolutional neural networks that rely on extracting spatial features from pixel values. In contrast, our proposed method employs object detection based on YOLOv5 with Deep Sort to track people through a video, using the resulting bounding box coordinates as temporal features. The extracted temporal features are then modeled as a time-series classification problem. The proposed method was tested on the popular UCF-Crime dataset and benchmarked against the current state-of-the-art robust temporal feature magnitude (RTFM) method, which relies on the Inflated 3D ConvNet (I3D) for preprocessing. Our results demonstrate an impressive 8.45-fold increase in detection inference speed compared to the state-of-the-art RTFM, along with an F1 score of 92%, outperforming RTFM by 3%. Furthermore, our method achieved these results without requiring expensive data augmentation or image feature extraction.
(This article belongs to the Section Intelligent Sensors)
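The key idea above is to use the tracked bounding box coordinates themselves as the temporal signal. A minimal sketch of that feature construction, assuming a detector/tracker has already produced per-frame boxes for one person (the track values and feature choices here are illustrative, not the paper's exact format):

```python
import numpy as np

def track_to_features(boxes):
    """Convert a (T, 4) array of per-frame boxes [x1, y1, x2, y2] for one
    tracked person into a temporal feature matrix: center, size, velocity."""
    boxes = np.asarray(boxes, dtype=float)
    cx = (boxes[:, 0] + boxes[:, 2]) / 2      # box center x
    cy = (boxes[:, 1] + boxes[:, 3]) / 2      # box center y
    w = boxes[:, 2] - boxes[:, 0]             # box width
    h = boxes[:, 3] - boxes[:, 1]             # box height
    vx = np.gradient(cx)                      # per-frame horizontal velocity
    vy = np.gradient(cy)                      # per-frame vertical velocity
    return np.stack([cx, cy, w, h, vx, vy], axis=1)   # (T, 6) time series

# A short synthetic track: the person drifts right at 2 px/frame, box size fixed.
track = [[10 + 2 * t, 50, 30 + 2 * t, 120] for t in range(8)]
feats = track_to_features(track)
print(feats.shape)    # (8, 6)
print(feats[0, 4])    # horizontal velocity of 2.0 px/frame
```

The resulting (T, 6) matrix is the kind of multivariate time series that classifiers such as XceptionTime or XCM, mentioned in the figure list, can consume directly.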
Figures:
Figure 1. Extracted dataset format.
Figure 2. Description of the columns in Figure 1.
Figure 3. Proposed model pipeline.
Figure 4. Capturing bounding boxes with YOLOv5 and Deep Sort.
Figure 5. Inception module proposed in [28].
Figure 6. Inception network proposed in [28].
Figure 7. XceptionTime module proposed in [29].
Figure 8. Overall XceptionTime architecture proposed in [29].
Figure 9. XCM architecture proposed in [30].
Figure 10. Confusion matrix for the best models.
Figure 11. F1 score distributions across 10-fold cross-validation for each model.
Figure 12. RTFM confusion matrix.
Figure 13. RTFM misclassifications that the proposed method classified correctly.
12 pages, 1709 KiB  
Article
Detection of Shoplifting on Video Using a Hybrid Network
by Lyudmyla Kirichenko, Tamara Radivilova, Bohdan Sydorenko and Sergiy Yakovlev
Computation 2022, 10(11), 199; https://doi.org/10.3390/computation10110199 - 6 Nov 2022
Cited by 7 | Viewed by 5760
Abstract
Shoplifting is a major problem for shop owners and many other parties, including the police. Video surveillance generates huge amounts of information that staff cannot process in real time. In this article, the problem of detecting shoplifting in video records was solved using a classifier in the form of a hybrid neural network combining convolutional and recurrent components. The convolutional network was used to extract features from the video frames; the recurrent network processed the time sequence of frame features and classified the video fragments. Gated recurrent units were selected as the recurrent network. The well-known UCF-Crime dataset was used to form the training and test datasets. The classification results showed a high accuracy of 93%, which was higher than the accuracy of the classifiers considered in the review. Further research will focus on the practical implementation of the proposed hybrid neural network.
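This abstract's recurrent component is a gated recurrent unit (GRU). As a reference point, here is a minimal numpy sketch of one standard GRU step applied over a sequence of frame features; the sizes and random weights are illustrative only, and bias terms are omitted for brevity.

```python
import numpy as np

def gru_step(x, h, params):
    """One gated recurrent unit step: update gate z, reset gate r,
    candidate state h~, following the standard GRU equations."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h)                  # how much to overwrite the state
    r = sig(Wr @ x + Ur @ h)                  # how much history feeds the candidate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))   # candidate new state
    return (1 - z) * h + z * h_cand

rng = np.random.default_rng(1)
d, hidden, T = 8, 4, 10
params = tuple(rng.normal(size=s) * 0.1
               for s in [(hidden, d), (hidden, hidden)] * 3)
h = np.zeros(hidden)
for x in rng.normal(size=(T, d)):             # stand-in CNN frame features
    h = gru_step(x, h, params)
print(h.shape)                                # final state summarizes the clip
```

In the hybrid pipeline the final hidden state (or the full state sequence) would then feed a dense classification head.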
Figures:
Figure 1. Video frames: (a) not shoplifting; (b) shoplifting.
Figure 2. The main stages of the classification algorithm.
Figure 3. Training and validation accuracy depending on the epoch.
Figure 4. Training and validation loss values depending on the epoch.
Figure 5. ROC curve and AUC value.
18 pages, 894 KiB  
Article
Using Social Signals to Predict Shoplifting: A Transparent Approach to a Sensitive Activity Analysis Problem
by Shane Reid, Sonya Coleman, Philip Vance, Dermot Kerr and Siobhan O’Neill
Sensors 2021, 21(20), 6812; https://doi.org/10.3390/s21206812 - 13 Oct 2021
Viewed by 3021
Abstract
Retail shoplifting is one of the most prevalent forms of theft and accounted for over one billion GBP in losses for UK retailers in 2018. An automated approach to detecting behaviours associated with shoplifting using surveillance footage could help reduce these losses. Until recently, most state-of-the-art vision-based approaches to this problem have relied heavily on black box deep learning models. While these models have been shown to achieve very high accuracy, the lack of insight into how their decisions are made raises concerns about potential bias. This limits the ability of retailers to implement these solutions, as several high-profile legal cases have recently ruled that evidence taken from these black box methods is inadmissible in court. There is an urgent need for models which can achieve high accuracy while providing the necessary transparency. One way to alleviate this problem is to use social signal processing to add a layer of understanding in the development of transparent models for this task. To this end, we present a social signal processing model for the problem of shoplifting prediction which has been trained and validated using a novel dataset of manually annotated shoplifting videos. The resulting model provides a high degree of understanding and achieves accuracy comparable with current state-of-the-art black box methods.
Figures:
Figure 1. The black box model for shoplifting detection, where a raw video sequence is used to train a black box algorithm (such as a 3D CNN as in [6]) to detect suspicious individuals. The nature of these models makes them difficult to interpret and susceptible to bias in the training data.
Figure 2. The transparent social signal processing model for shoplifting detection.
Figure 3. Sample frame from the UCF-Crime shoplifting video dataset.
14 pages, 470 KiB  
Article
The Influence of Alcohol Consumption on Fighting, Shoplifting and Vandalism in Young Adults
by Ieuan Evans, Jon Heron, Joseph Murray, Matthew Hickman and Gemma Hammerton
Int. J. Environ. Res. Public Health 2021, 18(7), 3509; https://doi.org/10.3390/ijerph18073509 - 28 Mar 2021
Cited by 3 | Viewed by 3630
Abstract
Experimental studies support the conventional belief that people behave more aggressively whilst under the influence of alcohol. To examine how these experimental findings manifest in real life situations, this study uses a method for estimating evidence for causality with observational data, 'situational decomposition', to examine the association between alcohol consumption and crime in young adults from the Avon Longitudinal Study of Parents and Children. Self-report questionnaires were completed at age 24 years to assess typical alcohol consumption and frequency, participation in fighting, shoplifting and vandalism in the previous year, and whether these crimes were committed under the influence of alcohol. Situational decomposition compares the strength of two associations: (1) the total association between alcohol consumption and crime (sober or intoxicated) versus (2) the association between alcohol consumption and crime committed while sober. There was an association between typical alcohol consumption and total crime for fighting [OR (95% CI): 1.47 (1.29, 1.67)], shoplifting [OR (95% CI): 1.25 (1.12, 1.40)], and vandalism [OR (95% CI): 1.33 (1.12, 1.57)]. The associations for both fighting and shoplifting had a small causal component (with the association for sober crime slightly smaller than the association for total crime), while the association for vandalism had a larger causal component.
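The comparison at the heart of situational decomposition is between two odds ratios: one for crime overall and one for crime committed while sober. A toy numeric sketch of that comparison, with entirely hypothetical counts (the study itself fits regression models on survey data, not raw 2x2 tables):

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds ratio from a 2x2 table: (a/b) / (c/d)."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

# Hypothetical counts: heavier drinkers vs lighter drinkers,
# crime committed at all vs crime committed while sober only.
or_total = odds_ratio(60, 140, 45, 155)   # any fighting (sober or intoxicated)
or_sober = odds_ratio(40, 160, 35, 165)   # fighting while sober only

print(round(or_total, 2), round(or_sober, 2))
```

A total odds ratio exceeding the sober odds ratio, as in this toy example, is the pattern the method reads as evidence of a causal component of alcohol; if the two were equal, the whole association would be attributed to stable person-level differences rather than intoxication itself.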
Figures:
Figure 1. (a) Predicted odds of total fighting (solid black line) and sober fighting (dashed grey line) according to units of alcohol consumed in a typical drinking session; (b) predicted odds of total shoplifting (solid black line) and sober shoplifting (dashed grey line); (c) predicted odds of total vandalism (solid black line) and sober vandalism (dashed grey line). In each panel, the solid black line represents the total association (both non-causal and causal), and the dashed grey line represents the non-causal association; a black line steeper than the grey line indicates a causal effect of alcohol on crime.
25 pages, 7083 KiB  
Article
Criminal Intention Detection at Early Stages of Shoplifting Cases by Using 3D Convolutional Neural Networks
by Guillermo A. Martínez-Mascorro, José R. Abreu-Pederzini, José C. Ortiz-Bayliss, Angel Garcia-Collantes and Hugo Terashima-Marín
Computation 2021, 9(2), 24; https://doi.org/10.3390/computation9020024 - 23 Feb 2021
Cited by 31 | Viewed by 5827
Abstract
Crime generates significant losses, both human and economic. Every year, billions of dollars are lost due to attacks, crimes, and scams. Surveillance video camera networks generate vast amounts of data, and the surveillance staff cannot process all the information in real time. Human sight has critical limitations, and among them, visual focus is one of the most critical when dealing with surveillance. For example, in a surveillance room, a crime can occur in a different screen segment or on a distinct monitor, and the surveillance staff may overlook it. Our proposal focuses on shoplifting crimes by analyzing situations that an average person would consider typical, but that may eventually lead to a crime. While other approaches identify the crime itself, we instead model suspicious behavior, the behavior that may occur before the build-up phase of a crime, by detecting precise segments of a video with a high probability of containing a shoplifting crime. By doing so, we give the staff more opportunities to act and prevent crime. We implemented a 3DCNN model as a video feature extractor and tested its performance on a dataset composed of daily action and shoplifting samples. The results are encouraging, as the model correctly classifies suspicious behavior in most of the scenarios where it was tested. For example, when classifying suspicious behavior, the best model generated in this work obtains precision and recall values of 0.8571 and 1, respectively, in one of the test scenarios.
(This article belongs to the Section Computational Engineering)
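What distinguishes a 3D CNN from the 2D variety is that its kernels also span time, so each output value summarizes a spatio-temporal neighbourhood of frames. A naive numpy sketch of one such convolution, using a temporal kernel depth of 10 frames as in the paper's experiment P01 (single channel, no stride or padding, random weights, purely illustrative):

```python
import numpy as np

def conv3d_valid(video, kernel):
    """Naive 'valid' 3D convolution over a (T, H, W) clip with a
    (kt, kh, kw) kernel: each output voxel is the sum of an elementwise
    product over a spatio-temporal window of the clip."""
    T, H, W = video.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = np.sum(video[t:t+kt, i:i+kh, j:j+kw] * kernel)
    return out

rng = np.random.default_rng(2)
clip = rng.normal(size=(10, 24, 32))     # 10 frames of 24x32 grayscale
kernel = rng.normal(size=(10, 3, 3))     # temporal depth 10 spans the whole clip
features = conv3d_valid(clip, kernel)
print(features.shape)                    # (1, 22, 30)
```

With the kernel depth equal to the clip length, the temporal output dimension collapses to 1, which is why the experiments vary depth (10, 30, 90 frames) against input resolution: deeper kernels see more motion context per output value.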
Figures:
Figure 1. Different situations may be recorded by surveillance cameras. Suspicious behavior is not the crime itself. However, particular situations will make us distrust a person if we consider their behavior to be "suspicious".
Figure 2. Video segmentation using the moments obtained from the Pre-Crime Behavior Segment (PCB) method.
Figure 3. Graphical representation of the process for suspicious behavior sample extraction.
Figure 4. Architecture of the DL model used for this investigation. The depth of the kernel for the 3D convolution is adjusted to 10, 30, or 90 frames, according to each particular experiment (see Section 4).
Figure 5. Overview of the experimental setup followed in this work. For a detailed description of the parameters and the relation of the samples considered for each experiment, please consult Appendix A.
Figure 6. Interaction plot of depth (10, 30, and 90 frames) and resolution (32 × 24, 40 × 30, 80 × 60, and 160 × 120 pixels) using the accuracy values obtained from experiment P01.
Figure 7. Interaction plot of the proportion of the base set used for training (80%, 70%, and 60%) and resolution (32 × 24, 40 × 30, 80 × 60, and 160 × 120 pixels) using the accuracy values obtained from experiment P02.
Figure 8. Interaction plot of depth (10, 30, and 90 frames) and resolution (32 × 24, 40 × 30, 80 × 60, and 160 × 120 pixels) using the accuracy values obtained from experiment P03.
Figure 9. Interaction plot of depth (10, 30, and 90 frames) and resolution (32 × 24, 40 × 30, 80 × 60, and 160 × 120 pixels) using the accuracy values obtained from experiment P04 (using 60% of the dataset for training).
Figure 10. Interaction plot of depth (10, 30, and 90 frames) and resolution (32 × 24, 40 × 30, 80 × 60, and 160 × 120 pixels) using the accuracy values obtained from experiment P04 (using 70% of the dataset for training).
Figure 11. Confusion matrices for the best model generated for each configuration in the confirmatory experiment.