CN108280841A - A foreground extraction method based on neighborhood pixel intensity correction - Google Patents
A foreground extraction method based on neighborhood pixel intensity correction
- Publication number
- CN108280841A (application CN201810039690.XA)
- Authority
- CN
- China
- Prior art keywords: background, image, pixel intensity, intensity correction
- Legal status (assumption, not a legal conclusion): Granted
Classifications
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/194 — Segmentation; edge detection involving foreground-background segmentation
- G06T2207/10016 — Video; image sequence
Abstract
Description
Technical Field
The invention relates to the technical field of machine vision and image target detection, and in particular to a foreground extraction method based on neighborhood pixel intensity correction.
Background Art
At present, the automatic processing and prediction of surveillance video information receives great attention in fields such as information science, computer vision, machine learning, and pattern recognition. How to extract foreground target information from surveillance video effectively and quickly is a fundamental problem within it. Extracting moving targets from a video sequence means segmenting the moving regions from the sequence, estimating the targets' motion behavior to predict their physical characteristics in the next frame, and matching targets across the image sequence according to these characteristics to obtain their motion trajectories. It combines knowledge from computer image processing, video image processing, pattern recognition, artificial intelligence, automatic control, and many other related fields, and is a key technology of intelligent video surveillance.
For moving-target extraction there are mainly three approaches: background subtraction, frame differencing, and optical flow. Background subtraction is the most commonly used. It subtracts the extracted background from the current frame to obtain the foreground moving target directly; it suits situations where the background is known, and the computation is simple. In practical applications, however, the background is often not fixed, and the background model must be updated dynamically.
Stauffer and Grimson, in "Adaptive background mixture models for real-time tracking", model each pixel with a Gaussian mixture and update the model with online estimation. "Improved adaptive Gaussian mixture model for background subtraction" filters and updates the background image with a recursive filter, improving the stability of background subtraction. "Efficient hierarchical method for background subtraction" combines pixel-based and block-based methods to build an effective hierarchical background image, first determining the non-fixed background. The frame-difference method directly differences the current frame against the previous one; the absolute pixel difference between two temporally adjacent images reveals the position and shape of the target. This method adapts well to the environment, but the extracted targets tend to contain holes and blurred edges. "A New Method for Moving Target Detection Based on Inter-Frame Difference and Background Difference" combines background subtraction with frame differencing: after detecting the background pixels of a frame with frame differencing, it detects the moving target with background subtraction, overcoming false detections and holes. "Moving Target Detection Technology in Video Sequences" extracts moving regions with consecutive frame differencing and removes noise with mathematical morphological filtering, improving the extraction of moving regions. The optical flow method iterates the inter-frame difference repeatedly and shares the basic principle of frame differencing, but its error grows nonlinearly with the speed of the moving target. Moreover, optical flow is computationally expensive and cannot meet the real-time requirements of many moving-target detection and tracking systems. To address this shortcoming, "Real-time Detection Method of Human Movement Based on Optical Flow" proposes combining the absolute difference image and SAD with optical flow, detecting moving targets effectively and accurately. "Regional Optical Flow Analysis Based on Inter-Frame Difference and Its Application" compares the strengths and weaknesses of frame differencing, background subtraction, and optical flow, and proposes a regional optical-flow analysis based on joint inter-frame differences, increasing processing speed and reducing the computational cost of optical flow.
Summary of the Invention
To solve the above technical problems, the present invention proposes a foreground extraction method based on neighborhood pixel intensity correction. It adopts a foreground detection algorithm based on neighborhood pixel intensity correction in dynamic scenes, is more robust, and suits relatively complex changing scenes.
The present invention provides a foreground extraction method based on neighborhood pixel intensity correction. It takes a video as input and further comprises the following steps:
Step 1: acquire the initial background from the first frame of the input video's frame sequence;
Step 2: compute the difference between foreground and background against the initial background;
Step 3: set an image difference threshold and perform the stability calculation;
Step 4: determine the optimal threshold with the Otsu method and extract the target from the background image.
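The four steps can be sketched in code. The sketch below is an illustrative reconstruction, not the patented implementation: the simple background-refresh rule and the fixed threshold τ are assumptions, and the neighborhood intensity correction of steps 3 and 4 is omitted here.

```python
import numpy as np

def extract_foreground(frames, tau=25.0):
    """Illustrative sketch of steps 1-4. The background refresh rule
    below is a simplifying assumption; the patent instead corrects
    the background via neighborhood statistics."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    bg = frames[0].copy()                      # step 1: B1 = F1
    masks = []
    for f in frames[1:]:
        d = np.abs(f - bg)                     # step 2: |Fi - Bi-1|
        mask = (d > tau).astype(np.uint8)      # step 3: threshold the difference
        bg = np.where(mask == 0, f, bg)        # naive refresh of background pixels
        masks.append(mask)                     # step 4 would refine tau via Otsu
    return masks
```

A bright object appearing against a static background yields a mask of ones at the object's pixels and zeros elsewhere.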
Preferably, in step 1, B1(x, y) = F1(x, y) is computed, where B1(x, y) and F1(x, y) are respectively the background pixel value and the current-frame pixel value at coordinate (x, y), with x ≤ P and y ≤ Q, where P and Q denote the width and height of the initial background.
Preferably, in any of the above schemes, for the i-th frame, the background image Bi used for the difference is obtained from Bi-1.
Preferably, in any of the above schemes, step 2 comprises computing the difference Di between the current background and the current frame as Di = |Fi(x, y) - Bi-1(x, y)|.
Preferably, in any of the above schemes, step 2 further comprises separating the image difference Di into background image and moving target: Di(x, y) = 1 if Di(x, y) > τ, and Di(x, y) = 0 otherwise, where τ is a user-set threshold.
Preferably, in any of the above schemes, step 3 comprises computing the stability S(x, y) of each pixel, where Si denotes the stability matrix of the i-th frame, initialized to zero, i.e. S1(x, y) = 0.
Preferably, in any of the above schemes, step 2 comprises judging whether to correct the pixel values.
Preferably, in any of the above schemes, the judgment is: when the minimum stability min(x,y) Si(x, y) is smaller than the threshold δ, the current background pixel values are corrected; when min(x,y) Si(x, y) is greater than or equal to δ, they are not.
Preferably, in any of the above schemes, when Di(x, y) = 1 (the pixel belongs to the current moving target) and there exists an unstable value with Si(x, y) < 0, the set is reconstructed to reduce computation: Pi = {(x, y) | [Di(x, y) = 1] ∩ [Si(x, y) < 0]}.
Preferably, in any of the above schemes, step 3 comprises computing the standard deviation between corresponding pixels, σ = sqrt((1/N) Σ (I(x, y) - μ)²), where n is the side length of a square moving window, N = n² is the number of pixels, and the mean μ is computed from the pixel values of image I as μ = (1/N) Σ I(x, y).
Preferably, in any of the above schemes, for each pixel of Pi, two standard deviations σB and σF are computed from the background image and the current frame respectively, and the background pixel value Bi(x, y) is calculated from them.
Preferably, in any of the above schemes, the size of the square moving window can change during motion; it contains (n0 + n1) pixels, where n0 and n1 denote the numbers of non-moving and moving pixels respectively, with corresponding pixel intensities g0 and g1.
Preferably, in any of the above schemes, the pixel mean formula becomes μ = (n0·g0 + n1·g1) / (n0 + n1).
Preferably, in any of the above schemes, the standard deviation formula becomes σ = |g0 - g1|·sqrt(n0·n1) / (n0 + n1), where |g0 - g1| and (n0 + n1) are constants.
Preferably, in any of the above schemes, step 3 further comprises, after the pixel intensity correction, using the detected background for subsequent foreground extraction and taking the processing result as the input of the subsequent (i+1)-th frame.
Preferably, in any of the above schemes, step 4 further comprises obtaining the image-based difference D̂i from the current frame and the background image as D̂i = |Fi(x, y) - Bi(x, y)|.
Preferably, in any of the above schemes, step 4 further comprises finding a reasonable threshold that minimizes the weighted sum of the two class variances: σw²(g) = ωG0(g)·σG0² + ωG1(g)·σG1², where G0 is the background region, G1 is the foreground target region, ωG0(g) and ωG1(g) denote the class probabilities at intensity g, and σG0², σG1² are the class variances.
Preferably, in any of the above schemes, step 4 further comprises separating the foreground target from the image-based difference D̂i, using the Otsu threshold method to determine the optimal threshold τ* = argmin over g of σw²(g).
Preferably, in any of the above schemes, the image difference formula becomes Di(x, y) = 1 if D̂i(x, y) > τ*, and Di(x, y) = 0 otherwise, where D̂i is a grayscale image.
Preferably, in any of the above schemes, step 4 further comprises applying basic erosion and dilation to edge breaks caused by abrupt changes in brightness and illumination.
Preferably, in any of the above schemes, the role of erosion is to eliminate the boundary points of an object. Its principle is A ⊖ S = {(x, y) | S(x, y) ⊆ A}, where A is the target region on the plane (x, y), S is a template element of specified size and shape, S(x, y) is the region represented by the template element S located at coordinate (x, y), and S(x, y)/A is the difference set, i.e. the elements belonging to S but not to A.
Preferably, in any of the above schemes, the role of dilation is to extend the boundary points of an object. Its principle is A ⊕ S = {(x, y) | S(x, y) ∩ A ≠ ∅}, where A is the target region in the image frame, S is a structuring element of specified size, the region represented by S located at coordinate (x, y) is defined as S(x, y), and S(x, y) ∩ A is the intersection, i.e. the set belonging to both S and A.
The present invention proposes a foreground extraction method based on neighborhood pixel intensity correction that extracts foreground targets accurately. The algorithm is conceptually clear and simple to implement, resists scene illumination changes effectively, eliminates the influence of camera shake on foreground extraction, and adapts well to dynamic scenes.
Brief Description of the Drawings
Fig. 1 is a flowchart of a preferred embodiment of the foreground extraction method based on neighborhood pixel intensity correction according to the present invention.
Fig. 2 is the original image before erosion in a preferred embodiment of the method.
Fig. 2A is the S1 template of the embodiment shown in Fig. 2.
Fig. 2B is the result of erosion with template S1 for the embodiment shown in Fig. 2.
Fig. 2C is the S2 template of the embodiment shown in Fig. 2.
Fig. 2D is the result of erosion with template S2 for the embodiment shown in Fig. 2.
Fig. 3 is the original image before dilation in a preferred embodiment of the method.
Fig. 3A is the S3 template of the embodiment shown in Fig. 3.
Fig. 3B is the result of dilation with template S3 for the embodiment shown in Fig. 3.
Fig. 3C is the S4 template of the embodiment shown in Fig. 3.
Fig. 3D is the result of dilation with template S4 for the embodiment shown in Fig. 3.
Detailed Description of the Embodiments
The present invention is further elaborated below with reference to the accompanying drawings and specific embodiments.
Embodiment One
As shown in Fig. 1, step 100 is executed to extract the frame sequence. The continuous model based on neighborhood pixel intensity correction extracts moving targets from the background by updating the motion information of the current frame. Step 105 is executed: the first frame of the video input serves as the initial background, as in Eq. (1):
B1(x, y) = F1(x, y),  (1)
where B1(x, y) and F1(x, y) are respectively the background pixel value and the current-frame pixel value at coordinate (x, y), with x ≤ P and y ≤ Q, where P and Q denote the width and height of the initial background. For the i-th frame, the background image Bi used for the difference is obtained from Bi-1. First, the difference Di between the current background and the current frame is computed on grayscale images by Eq. (2):
Di = |Fi(x, y) - Bi-1(x, y)|.  (2)
The image difference Di contains both the moving target and noise, so Di must be separated into background image and moving target by the threshold τ, as in Eq. (3):
Di(x, y) = 1 if Di(x, y) > τ, and Di(x, y) = 0 otherwise.  (3)
In principle, moving targets differ significantly from noise and shadows. A higher value of τ removes noise, but some moving pixels will then be misjudged as noise points; conversely, if τ is too small, noise points will be misjudged as moving target points. Therefore τ directly affects Di, and a suitable value must be set in the experimental part. For moving pixels Di = 1. To reduce shape anomalies as much as possible, we consider the stability S(x, y) of each pixel. S(x, y) is computed from the number of times the pixel value changes and is updated every frame. If the pixel value changes in two consecutive frames, the pixel is unstable and its original value should be retained. The stability of each pixel is computed by Eq. (4) from consecutive adjacent frames.
Here Si denotes the stability matrix of the i-th frame, whose initial value is zero, i.e. S1(x, y) = 0. As the frame count accumulates, the stability value of non-moving target pixels grows larger than that of moving target pixels.
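Since Eq. (4) itself is not reproduced in this text, the update below is only one plausible form consistent with the surrounding description (stability accumulates for unchanged pixels and can go negative for changing ones); the function name and the unit increments are assumptions.

```python
import numpy as np

def update_stability(S_prev, D_mask):
    """Assumed per-pixel stability update in the spirit of Eq. (4):
    pixels judged background (D = 0) gain stability, changed pixels
    (D = 1) lose it, so non-moving pixels accumulate larger values
    over successive frames and moving ones can drop below zero."""
    return np.where(D_mask == 0, S_prev + 1, S_prev - 1)
```

Starting from S1 = 0, a pixel that keeps changing ends up with a negative stability value, matching the Si(x, y) < 0 condition used later.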
During computation, if the current background is already close to the real background, pixel-value correction is not necessarily needed, and skipping it avoids lowering computational efficiency. The judgment is: if the minimum stability is at least the given stability threshold δ, i.e. min(x,y) Si(x, y) ≥ δ, the pixel-value correction process is skipped and the current background is kept to the next frame. For example, Bi = Bi-1 is assigned for the (i+1)-th frame, and the method proceeds directly to the foreground extraction stage. Conversely, if min(x,y) Si(x, y) < δ, the current background pixel values are corrected. The specific process is as follows:
Pixel values are corrected when correction is required and both of the following hold:
(1) the pixel belongs to the current moving target, i.e. Di(x, y) = 1;
(2) there exists an unstable value with Si(x, y) < 0. In this case Eq. (6) reconstructs the set to reduce computation:
Pi = {(x, y) | [Di(x, y) = 1] ∩ [Si(x, y) < 0]}.  (6)
Here Pi is the set of pixels to be filtered. Correction proceeds by comparing the difference at each pixel between the background image and the current frame, then computing the standard deviation between corresponding pixels: a small value means the pixel values are close to the mean; otherwise they are scattered. For a square moving window, the standard deviation σ is computed by Eq. (7) below.
σ = sqrt((1/N) Σ (I(x, y) - μ)²),  (7)
where n is the side length of the square moving window, N = n² is the number of pixels, and the mean μ is computed from the pixel values of image I, as in Eq. (8):
μ = (1/N) Σ I(x, y).  (8)
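Eqs. (7) and (8) can be sketched directly; the clipping at image borders is an implementation assumption not specified in the text.

```python
import numpy as np

def window_std(img, x, y, n):
    """Standard deviation over the n*n square window centered at
    (x, y), following Eqs. (7)-(8): N = n**2 pixels, mu = window
    mean. Windows are clipped at the image border (assumption)."""
    h = n // 2
    win = np.asarray(img, dtype=float)[max(0, x - h):x + h + 1,
                                       max(0, y - h):y + h + 1]
    mu = win.mean()                                   # Eq. (8)
    return float(np.sqrt(((win - mu) ** 2).mean()))   # Eq. (7)
```

A flat window gives σ = 0; a single outlier in an otherwise flat window gives the expected nonzero deviation.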
For each pixel of Pi, two standard deviations σB and σF are computed from the background image and the current frame respectively, and the background pixel value Bi(x, y) is computed from these two standard deviations, as in Eq. (9).
The size (n*n) of the moving window can change during motion, and it affects the result of the pixel-value correction. To describe this effect, let n0 and n1 denote the numbers of non-moving and moving pixels respectively, with corresponding pixel intensities g0 and g1, so the whole moving window contains (n0 + n1) pixels. The pixel mean then changes from Eq. (8) to Eq. (10):
μ = (n0·g0 + n1·g1) / (n0 + n1).  (10)
The standard deviation formula becomes Eq. (11):
σ = |g0 - g1|·sqrt(n0·n1) / (n0 + n1),  (11)
where |g0 - g1| and (n0 + n1) are constants, so Eq. (11) is determined by sqrt(n0·n1) / (n0 + n1). The standard deviation is therefore determined by the numbers of non-moving and moving pixels, which in turn affects the neighborhood pixel intensity correction model. Adjusting the standard-deviation parameters thus adapts to various background scenes, such as low-, medium-, and high-speed object motion.
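The closed form of Eq. (11) can be checked numerically against a direct standard-deviation computation over a two-level window; the function name is illustrative.

```python
import numpy as np

def two_level_std(n0, n1, g0, g1):
    """Closed form of Eq. (11): the standard deviation of a window
    holding n0 pixels of intensity g0 and n1 pixels of intensity g1
    reduces to |g0 - g1| * sqrt(n0 * n1) / (n0 + n1)."""
    return abs(g0 - g1) * np.sqrt(n0 * n1) / (n0 + n1)
```

For example, a window of three pixels at 0 and one pixel at 4 has std sqrt(3), and the closed form gives 4·sqrt(3)/4, the same value.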
After pixel intensity correction, the neighborhood pixel intensity correction model uses the detected background for subsequent foreground extraction and takes the processing result as the input of the subsequent (i+1)-th frame.
With the background detected above, the method enters the foreground extraction stage. The main idea of the foreground extraction is an adaptive background-difference algorithm. The difference between the current frame and the background image follows Eq. (2), giving Eq. (12):
D̂i = |Fi(x, y) - Bi(x, y)|.  (12)
The foreground target is separated from the image-based difference D̂i, and the Otsu threshold method determines the optimal threshold for extracting the target from the background image. G0 is the background region and G1 is the foreground target region; by minimizing the within-class variance, the threshold reduces classification errors. A reasonable threshold is sought that minimizes the weighted sum of the two class variances, as in Eq. (13):
σw²(g) = ωG0(g)·σG0² + ωG1(g)·σG1²,  (13)
where ωG0(g) and ωG1(g) denote the class probabilities at intensity g, and σG0², σG1² are the class variances. Following reference [13], the threshold is defined by Eq. (14):
τ* = argmin over g of σw²(g).  (14)
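The threshold search of Eqs. (13)-(14) can be sketched as an exhaustive scan over 8-bit intensities; this is a generic Otsu implementation, not code from the patent.

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the threshold t minimizing the weighted within-class
    variance of Eq. (13), scanning all 8-bit intensity levels."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    g = np.arange(256)
    best_t, best_score = 0, np.inf
    for t in range(1, 256):
        w0 = hist[:t].sum()                 # class G0 weight
        w1 = total - w0                     # class G1 weight
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (g[:t] * hist[:t]).sum() / w0
        mu1 = (g[t:] * hist[t:]).sum() / w1
        var0 = ((g[:t] - mu0) ** 2 * hist[:t]).sum() / w0
        var1 = ((g[t:] - mu1) ** 2 * hist[t:]).sum() / w1
        score = (w0 * var0 + w1 * var1) / total   # Eq. (13)
        if score < best_score:
            best_score, best_t = score, t
    return best_t
```

On a bimodal image the chosen threshold falls between the two modes, cleanly splitting foreground from background.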
Eq. (3) then becomes Eq. (15):
Di(x, y) = 1 if D̂i(x, y) > τ*, and Di(x, y) = 0 otherwise.  (15)
The produced D̂i is a grayscale image. The foreground may contain edge breaks caused by abrupt changes in brightness and illumination; basic erosion and dilation suffice to repair them.
In mathematical morphology, the role of erosion is to eliminate the boundary points of an object. In image processing, points smaller than the structuring element can be eliminated by erosion; that is, erosion can split regions of the target area that are joined by thin connections.
The erosion principle is:
A ⊖ S = {(x, y) | S(x, y) ⊆ A},
where A is the target region on the plane (x, y), S is a template element of specified size and shape, S(x, y) is the region represented by the template element S located at coordinate (x, y), and S(x, y)/A is the difference set, i.e. the elements belonging to S but not to A.
In image processing this can be understood as follows: define a template S and scan each pixel of the image from left to right and top to bottom; when the template, positioned at a pixel, lies entirely within the target, the current pixel is retained; otherwise it is deleted.
In morphological operations, the role of dilation is to extend the boundary points of an object. Dilation with a particular structuring element can connect nearby adjacent regions, though it also enlarges small noise points and sensitive points. The dilation principle is:
A ⊕ S = {(x, y) | S(x, y) ∩ A ≠ ∅},
where A is the target region in the image frame, S is a structuring element of specified size, the region represented by S located at coordinate (x, y) is defined as S(x, y), and S(x, y) ∩ A is the intersection, i.e. the set belonging to both S and A. Starting from the upper-left corner of the image, in scan order, when the structuring element positioned on a pixel intersects the target, the pixel is retained; otherwise it is deleted.
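The erosion and dilation definitions above can be sketched directly on binary arrays; leaving border pixels at 0 is an implementation assumption.

```python
import numpy as np

def erode(A, S):
    """Erosion per the definition: keep pixel (x, y) only if the
    template S, centered there, fits entirely inside the target A."""
    h, w = A.shape
    ph, pw = S.shape[0] // 2, S.shape[1] // 2
    out = np.zeros_like(A)
    for x in range(ph, h - ph):
        for y in range(pw, w - pw):
            win = A[x - ph:x + ph + 1, y - pw:y + pw + 1]
            if np.all(win[S == 1] == 1):
                out[x, y] = 1
    return out

def dilate(A, S):
    """Dilation per the definition: keep (x, y) if the template,
    centered there, intersects the target at all."""
    h, w = A.shape
    ph, pw = S.shape[0] // 2, S.shape[1] // 2
    out = np.zeros_like(A)
    for x in range(ph, h - ph):
        for y in range(pw, w - pw):
            win = A[x - ph:x + ph + 1, y - pw:y + pw + 1]
            if np.any(win[S == 1] == 1):
                out[x, y] = 1
    return out
```

With a 3*3 cross template on an 8*8 block image, erosion shrinks the block to its interior and dilation extends it by one pixel along the cross directions. For production use, the equivalent operations are available in libraries such as OpenCV (cv2.erode, cv2.dilate) and scipy.ndimage.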
Embodiment Two
The erosion process is implemented as follows. The template S is a cross-shaped structuring element of size 3*3. The target region A(x, y) to be eroded is an 8*8 binarized image. While scanning and eroding pixel by pixel, if the currently scanned pixel A(x, y) has value 1 (assuming 1 represents the pixel value of the black region in the figure) and its directly adjacent neighbors above, below, left, and right within the eight-neighborhood also have value 1, with the remaining neighbors having value 0, the pixel is considered to match the template S and A(x, y) is set to 1; otherwise it is set to 0. For example, if the eight-neighborhood of A(4, 4) is the first to match the values of the template S completely, its value after the erosion operation is 1. Eroding the image with different templates produces different results: Fig. 2 shows image A before erosion, Fig. 2A shows template S1, and Fig. 2B shows the result of eroding image A with template S1; Fig. 2C shows template S2, and Fig. 2D shows the result of eroding image A with template S2.
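Since Fig. 2 and the templates S1/S2 are not reproduced here, the sketch below substitutes a hypothetical 8×8 diamond-shaped region and two assumed templates, and uses SciPy's `binary_erosion` to illustrate the point of the embodiment: different templates give different erosion results.

```python
import numpy as np
from scipy.ndimage import binary_erosion

# Hypothetical 8x8 binary image standing in for Fig. 2's image A:
# a diamond of radius 2 centered at (4, 4).
yy, xx = np.mgrid[0:8, 0:8]
A = (np.abs(yy - 4) + np.abs(xx - 4)) <= 2

S1 = np.array([[0, 1, 0],
               [1, 1, 1],
               [0, 1, 0]], dtype=bool)   # cross-shaped 3x3 template
S2 = np.ones((3, 3), dtype=bool)          # full 3x3 square template

r1 = binary_erosion(A, structure=S1)  # keeps pixels whose 4-neighborhood is in A
r2 = binary_erosion(A, structure=S2)  # keeps pixels whose 8-neighborhood is in A
```

With the cross template, the five central pixels of the diamond survive; with the square template, only the single center pixel does, so the two templates visibly disagree on the same input.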
Embodiment Three
Dilation of a digital image, like erosion, is implemented with the help of a template; the template represents the shape of the structuring element. The dilation process can be described as follows: move the template over the target region so that it visits every pixel of the target image. When the template is centered at pixel coordinate (x, y), if any position in the template with value 1 corresponds to an image pixel with value 1, then the pixel at (x, y) after dilation is set to 1; otherwise the pixel at (x, y) is set to 0. As with erosion, templates of different sizes and shapes yield different dilated images. Unlike erosion, however, dilation does not necessarily contain the original target: if the defined template includes its center (i.e. the template center is 1), the dilated image is a complete expansion of the original target; if the template center is not 1, the dilated image is not a complete expansion of the original target but an expansion in which some of the original pixels are eroded away. Fig. 3 shows image B before dilation, Fig. 3A shows template S3, and Fig. 3B shows the result of dilating image B with template S3; Fig. 3C shows template S4, and Fig. 3D shows the result of dilating image B with template S4.
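The claim that a template whose center is 0 does not completely expand the original target can be checked directly; the single-pixel image and template below are hypothetical stand-ins for Fig. 3, and SciPy's `binary_dilation` is used in place of the scan loop.

```python
import numpy as np
from scipy.ndimage import binary_dilation

A = np.zeros((5, 5), dtype=bool)
A[2, 2] = True                       # a single isolated foreground pixel

S_center0 = np.array([[0, 1, 0],
                      [0, 0, 0],
                      [0, 1, 0]], dtype=bool)  # template center is 0

out = binary_dilation(A, structure=S_center0)
# The pixels above and below (2, 2) are set, but the original pixel itself
# is lost: dilation with a center-0 template is not a complete expansion
# of the original target.
```

With a template that does include its center (e.g. the 3×3 cross), the result would instead be a superset of A, matching the distinction drawn in the text.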
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810039690.XA CN108280841B (en) | 2018-01-16 | 2018-01-16 | Foreground extraction method based on neighborhood pixel intensity correction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108280841A true CN108280841A (en) | 2018-07-13 |
CN108280841B CN108280841B (en) | 2022-03-29 |
Family
ID=62803722
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810039690.XA Active CN108280841B (en) | 2018-01-16 | 2018-01-16 | Foreground extraction method based on neighborhood pixel intensity correction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108280841B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104616290A (zh) * | 2015-01-14 | 2015-05-13 | Hefei University of Technology | Target detection algorithm in combination of statistical matrix model and adaptive threshold |
US20170161905A1 (en) * | 2015-12-07 | 2017-06-08 | Avigilon Analytics Corporation | System and method for background and foreground segmentation |
CN106846359A (zh) * | 2017-01-17 | 2017-06-13 | Hunan Youxiang Technology Co., Ltd. | Method for quick detection of moving targets based on a video sequence |
CN106934819A (zh) * | 2017-03-10 | 2017-07-07 | Chongqing University of Posts and Telecommunications | A kind of method of improving moving object segmentation precision in images |
Non-Patent Citations (3)
Title |
---|
LIU Jie: "Multi-feature fusion particle filter target tracking in complex scenes", China Master's Theses Full-text Database, Information Science and Technology Series (Monthly) * |
YANG Hongchen et al.: "Research on background reconstruction methods in video investigation", Journal of China Criminal Police University * |
HAO Zhicheng et al.: "Moving target detection in dynamic images based on a stability matrix", Acta Optica Sinica * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110769215A (en) * | 2018-08-21 | 2020-02-07 | 成都极米科技股份有限公司 | Thermal defocus compensation method and projection device |
CN110769215B (en) * | 2018-08-21 | 2021-12-03 | 成都极米科技股份有限公司 | Thermal defocus compensation method and projection device |
CN112070786A (en) * | 2020-07-17 | 2020-12-11 | 中国人民解放军63892部队 | Alert radar PPI image target/interference extraction method |
CN112070786B (en) * | 2020-07-17 | 2023-11-24 | 中国人民解放军63892部队 | Method for extracting warning radar PPI image target and interference |
CN112561951A (en) * | 2020-12-24 | 2021-03-26 | 上海富瀚微电子股份有限公司 | Motion and brightness detection method based on frame difference absolute error and SAD |
CN112561951B (en) * | 2020-12-24 | 2024-03-15 | 上海富瀚微电子股份有限公司 | Motion and brightness detection method based on frame difference absolute error and SAD |
Also Published As
Publication number | Publication date |
---|---|
CN108280841B (en) | 2022-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110517283B (en) | Gesture tracking method, gesture tracking device and computer readable storage medium | |
Rosin et al. | Image difference threshold strategies and shadow detection. | |
CN112184759A (en) | Moving target detection and tracking method and system based on video | |
CN107169985A (en) | A kind of moving target detecting method based on symmetrical inter-frame difference and context update | |
CN101971190A (en) | Real-time body segmentation system | |
CN108052917A (en) | A kind of method of the architecture against regulations automatic identification found based on new and old Temporal variation | |
CN106204594A (en) | A kind of direction detection method of dispersivity moving object based on video image | |
CN108171201A (en) | Eyelashes rapid detection method based on gray scale morphology | |
CN104537688A (en) | Moving object detecting method based on background subtraction and HOG features | |
CN107767404A (en) | A kind of remote sensing images sequence moving target detection method based on improvement ViBe background models | |
CN108280841B (en) | Foreground extraction method based on neighborhood pixel intensity correction | |
Liu et al. | Smoke-detection framework for high-definition video using fused spatial-and frequency-domain features | |
CN105447489B (en) | A kind of character of picture OCR identifying system and background adhesion noise cancellation method | |
CN105809673A (en) | SURF (Speeded-Up Robust Features) algorithm and maximal similarity region merging based video foreground segmentation method | |
KR102434397B1 (en) | Real time multi-object tracking device and method by using global motion | |
JP7096175B2 (en) | Object extraction method and device | |
CN103337082B (en) | Methods of video segmentation based on Statistical Shape priori | |
CN113516680A (en) | Moving target tracking and detecting method under moving background | |
CN109102520A (en) | The moving target detecting method combined based on fuzzy means clustering with Kalman filter tracking | |
Ng et al. | Object tracking initialization using automatic moving object detection | |
Zhang et al. | Motion detection based on improved Sobel and ViBe algorithm | |
CN107133965A (en) | One kind is based on computer graphic image morphological image segmentation method | |
CN112802055B (en) | Target ghost detection and edge propagation inhibition algorithm | |
Jiang et al. | Background subtraction algorithm based on combination of grabcut and improved ViBe | |
CN118864537B (en) | A method, device and equipment for tracking moving targets in video surveillance |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |