
Search Results (2)

Search Parameters:
Keywords = ABODA

20 pages, 3029 KiB  
Article
Enhanced Abandoned Object Detection through Adaptive Dual-Background Modeling and SAO-YOLO Integration
by Lei Zhou and Jingke Xu
Sensors 2024, 24(20), 6572; https://doi.org/10.3390/s24206572 - 12 Oct 2024
Cited by 2 | Viewed by 1079
Abstract
Abandoned object detection is a critical task in the field of public safety. However, existing methods perform poorly when detecting small and occluded objects, leading to high false detection and missed detection rates. To address this issue, this paper proposes an abandoned object detection method that integrates an adaptive dual-background model with SAO-YOLO (Small Abandoned Object YOLO). The goal is to reduce false and missed detection rates for small and occluded objects, thereby improving overall detection accuracy. First, the paper introduces an adaptive dual-background model that adjusts according to scene changes, reducing noise interference in the background model. When combined with an improved PFSM (Pixel-based Finite State Machine) model, this enhances detection accuracy and robustness. Next, a network model called SAO-YOLO is designed. Key improvements within this model include the SAO-FPN (Small Abandoned Object FPN) feature extraction network, which fully extracts features of small objects, and a lightweight decoupled head, SODHead (Small Object Detection Head), which precisely extracts local features and enhances detection accuracy through multi-scale feature fusion. Finally, experimental results show that SAO-YOLO increases mAP@0.5 and mAP@0.5:0.95 by 9.0% and 5.1%, respectively, over the baseline model, outperforming other advanced detection models. After a series of experiments on the ABODA, PETS2006, and AVSS2007 datasets, the proposed method achieved an average detection precision of 91.1%, surpassing other advanced methods and notably reducing false and missed detections, especially for small and occluded objects.
(This article belongs to the Section Sensing and Imaging)
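As a rough illustration of the dual-background idea described in the abstract, the sketch below (Python with OpenCV; all history lengths, learning rates, and file names are assumptions) maintains a slowly adapting long-term background and a quickly adapting short-term background. Pixels that remain foreground in the long-term model but have already been absorbed by the short-term model are candidate static (abandoned-object) pixels. This is a generic dual-background baseline, not the paper's adaptive model or its improved PFSM.

```python
import cv2

# Sketch of dual-background modelling for static-object detection.
# All learning rates and history lengths are illustrative assumptions.
long_bg = cv2.createBackgroundSubtractorMOG2(history=2000, detectShadows=False)
short_bg = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def static_foreground(frame):
    """Pixels still flagged as foreground by the slowly adapting (long-term)
    model but already absorbed by the quickly adapting (short-term) model
    are candidate abandoned-object pixels."""
    long_fg = long_bg.apply(frame, learningRate=0.0005)   # slow adaptation
    short_fg = short_bg.apply(frame, learningRate=0.01)   # fast adaptation
    return cv2.bitwise_and(long_fg, cv2.bitwise_not(short_fg))

cap = cv2.VideoCapture("video.avi")  # placeholder input path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = static_foreground(frame)
    # The paper then tracks the persistence of this mask with its improved
    # PFSM before declaring an abandoned object; that stage is omitted here.
cap.release()
```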
Figure 1: Flowchart of the abandoned object detection method.
Figure 2: The improved PFSM model.
Figure 3: SAO-YOLO structure diagram. (The model primarily consists of three parts: backbone, neck, and head. Improvements in the backbone and neck are achieved through the proposed SAO-FPN structure, which enhances detection accuracy by adjusting model depth and increasing the scale of branches for small object layers.)
Figure 4: (left) The YOLOv5 feature extraction network structure diagram; (right) the SAO-YOLO feature extraction network structure diagram. The proposed SAO-FPN feature extraction network reduces information loss by decreasing the overall network depth and adjusts the detection branches. By adding layers for small targets, it enhances the feature extraction capability for small objects.
Figure 5: Basic structure diagram of the SODHead. SODHead mainly consists of 1 × 1 convolutions and the LFEM module. The high-level and low-level feature maps are preprocessed separately using 1 × 1 convolutions before being input into the LFEM module, which outputs the final feature map.
Figure 6: Basic structure diagram of the LFEM module. The LFEM module takes the preprocessed X2 and X1 as separate inputs. After operations such as padding, cropping, and concatenation, X1 generates the corresponding V and K vectors, while X2 serves directly as the Q vector. The Q and K vectors are multiplied and normalized to compute the corresponding scores, which are then multiplied by the V vector. The result is fused with X2 to obtain the final output feature map.
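The Figure 6 caption reads like a cross-attention fusion: X1 supplies keys and values, X2 supplies queries, the normalized Q·K scores weight V, and the result is fused back into X2. A minimal PyTorch sketch of that pattern follows; the layer widths, flattening scheme, and additive fusion are assumptions, and the paper's padding, cropping, and concatenation preprocessing is omitted.

```python
import torch
import torch.nn as nn

class LFEMSketch(nn.Module):
    """Minimal sketch of the fusion described for LFEM: the low-level map X1
    supplies keys/values, the high-level map X2 supplies queries, and the
    attended result is fused back into X2. Hypothetical module, not the
    paper's implementation."""

    def __init__(self, channels):
        super().__init__()
        self.to_q = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_k = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_v = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x1, x2):
        b, c, h, w = x2.shape
        q = self.to_q(x2).flatten(2).transpose(1, 2)      # (B, HW, C) from X2
        k = self.to_k(x1).flatten(2)                      # (B, C, HW) from X1
        v = self.to_v(x1).flatten(2).transpose(1, 2)      # (B, HW, C) from X1
        scores = torch.softmax(q @ k / c ** 0.5, dim=-1)  # normalized Q-K scores
        attended = (scores @ v).transpose(1, 2).reshape(b, c, h, w)
        return attended + x2                              # fuse with X2

# Usage with assumed shapes: two 64-channel maps at the same resolution.
lfem = LFEMSketch(64)
out = lfem(torch.randn(1, 64, 20, 20), torch.randn(1, 64, 20, 20))
```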
Figure 7: (a) The original video frame, the long-term background model output, and the short-term background model output for the second scene in the ABODA dataset; (b) the same outputs for the first scene in the PETS2006 dataset.
Figure 8: (a) The original frame during a lighting change in the 7th scene of the ABODA dataset; (b) the output of the traditional mixture of Gaussians model for the same frame; (c) the output of the adaptive mixture of Gaussians model with the lighting change factor introduced for the same frame.
Figure 9: (a) The original frame from the first scene of the ABODA dataset; (b) the output of the traditional mixture of Gaussians model for the same frame; (c) the output of the adaptive mixture of Gaussians model for the same frame.
Figure 10: Visual comparison of experimental results. (a,b) are both sourced from the VisDrone dataset, where (a) illustrates the detection performance for small objects and (b) shows the detection performance for occluded objects.
17 pages, 13902 KiB  
Article
Robust Detection of Abandoned Object for Smart Video Surveillance in Illumination Changes
by Hyeseung Park, Seungchul Park and Youngbok Joo
Sensors 2019, 19(23), 5114; https://doi.org/10.3390/s19235114 - 22 Nov 2019
Cited by 16 | Viewed by 5924
Abstract
Most existing abandoned object detection algorithms use foreground information generated from background models. Detection based on background subtraction performs well under normal circumstances. However, it suffers from a significant problem: foreground information is gradually absorbed into the background over time and disappears, and it is very vulnerable to sudden illumination changes, which increase the false alarm rate. This paper presents an algorithm for detecting abandoned objects using a dual background model that is robust to illumination changes as well as other complex circumstances such as occlusion, long-term abandonment, and owner re-attendance. The proposed algorithm can adapt quickly to various illumination changes. It can also precisely track target objects and determine whether they are abandoned, regardless of the availability of foreground information and the effect of illumination changes, thanks to the largest-contour-based presence authentication mechanism proposed in this paper. For performance evaluation, we tested the algorithm on the PETS2006 and ABODA datasets as well as our own dataset, in particular to demonstrate its robustness under various illumination changes.
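The illumination-change handling described in the abstract can be illustrated with a simple global check: compare each frame's grayscale histogram with the previous frame's and, when the similarity drops sharply, re-initialize the background models rather than letting the change flood the foreground (cf. Figure 6 of this paper, which plots the whole-frame histogram during a change). The comparison metric and threshold below are assumptions, not the authors' parameters.

```python
import cv2

# Sketch of global illumination-change detection via frame histograms.
# The correlation metric and threshold value are illustrative assumptions.
ILLUM_CHANGE_THRESH = 0.5
prev_hist = None

def illumination_changed(frame):
    """Return True when the current frame's grayscale histogram differs
    sharply from the previous frame's, suggesting a sudden global
    illumination change rather than object motion."""
    global prev_hist
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
    cv2.normalize(hist, hist)
    changed = False
    if prev_hist is not None:
        similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
        changed = similarity < ILLUM_CHANGE_THRESH   # sudden global shift
    prev_hist = hist
    return changed

# When illumination_changed(frame) is True, the dual background models would
# be rebuilt from the current frame instead of updated incrementally.
```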
Figure 1: Framework of the proposed algorithm.
Figure 2: Abandoned object detection in PETS2006 Scenario 7. (VF, SF, LF, and DF represent video frame, short-term foreground, long-term foreground, and difference foreground, respectively.)
Figure 3: Illumination changes in ABODA video 7 without our illumination change adaptation technique.
Figure 4: Test result for ABODA video 6 without the illumination change adaptation technique.
Figure 5: Illumination change handling in ABODA video 6 and video 7.
Figure 6: Histogram of the whole video frame image with an illumination change in ABODA video 7.
Figure 7: Abandoned object detection in ABODA video 7.
Figure 8: Outdoor illumination changes in KICV (Koreatech illumination change video) video 1.
Figure 9: Indoor illumination changes in KICV video 2.