
CN108280841A - Foreground extraction method based on neighborhood pixel intensity correction - Google Patents


Info

Publication number
CN108280841A
Authority
CN
China
Prior art keywords
background
intensity correction
method based
pixel
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810039690.XA
Other languages
Chinese (zh)
Other versions
CN108280841B (en)
Inventor
玄祖兴
郭燕飞
孙欣
王海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Union University
Original Assignee
Beijing Union University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Union University filed Critical Beijing Union University
Priority to CN201810039690.XA priority Critical patent/CN108280841B/en
Publication of CN108280841A publication Critical patent/CN108280841A/en
Application granted granted Critical
Publication of CN108280841B publication Critical patent/CN108280841B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a foreground extraction method based on neighborhood pixel intensity correction. It comprises inputting a video and further comprises the following steps: obtaining an initial background from the first frame of the frame sequence of the input video; computing the foreground-background difference against the initial background; setting an image difference threshold and performing a stability calculation; and using the Otsu method to determine the optimal threshold and extract the target from the background image. The proposed foreground detection algorithm based on neighborhood pixel intensity correction is robust under dynamic scenes and is suitable for more complex scene changes.

Description

Foreground extraction method based on neighborhood pixel intensity correction
Technical Field
The invention relates to the technical field of machine vision and image target detection, in particular to a foreground extraction method based on neighborhood pixel intensity correction.
Background
At present, the automatic processing and prediction of surveillance video information attracts great interest in many fields, such as information science, computer vision, machine learning, and pattern recognition. How to effectively and quickly extract foreground target information from surveillance video is a very important and fundamental problem. Extracting a moving target from a video sequence means segmenting the moving region out of the sequence, predicting the physical characteristics of the target in the next frame by estimating its motion behavior, and associating and matching the target across the image sequence according to these characteristics to obtain its motion trajectory. This combines knowledge from several related fields, such as computer image processing, video image processing, pattern recognition, artificial intelligence, and automatic control, and is a key technology of intelligent video surveillance.
For moving target extraction, the main approaches are the background subtraction method, the frame difference method, and the optical flow method. Background subtraction is the most common. It obtains the foreground moving target directly by subtracting the extracted background from the current frame, is suitable when the background is known, and is computationally simple. In practical applications, however, the background is not fixed, and the background model must be updated dynamically.
Stauffer C and Grimson W, in "Adaptive background mixture models for real-time tracking", model each pixel with a Gaussian mixture distribution and update the model with online estimation. The document "Improved adaptive Gaussian mixture model for background subtraction" uses a recursive filter to filter and update the background image, increasing the stability of the background subtraction method. The document "An effective and efficient method for background subtraction" combines pixel-based and block-based methods to create an effective hierarchical background image, first determining the non-stationary background. The frame difference method directly subtracts the previous frame from the current frame; taking the absolute difference between two temporally adjacent images displays the position and shape of the target. This method adapts to the environment, but the extracted target is prone to holes and blurred edges. The document "A new moving object detection method based on inter-frame difference and background difference" combines the background subtraction method and the frame difference method: background pixel points in a frame are detected by the frame difference method, and the moving object is then detected by background subtraction, overcoming the problems of false detection and holes. The document "Moving target detection technology in video sequences" adopts a continuous inter-frame difference method to extract the moving region and eliminates noise by mathematical-morphology filtering, improving the extraction of the moving region. The optical flow method is a repeated iteration of inter-frame differencing, similar in basic principle to the frame difference method, but its error grows nonlinearly with the speed of the moving object. In addition, optical flow is computationally time-consuming and cannot meet the real-time requirements of many image-sequence moving-target detection and tracking systems. To overcome this defect, the document "Real-time human motion detection based on optical flow" proposes combining the absolute value of the difference image and SAD with the optical flow method, so as to detect moving targets effectively and accurately. The document "Analysis of regional optical flow based on inter-frame difference and its application" compares the advantages and disadvantages of the frame difference, background subtraction, and optical flow methods, and proposes analyzing regional optical flow jointly with inter-frame differencing, which improves processing speed and reduces the computational cost of the optical flow method.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a foreground extraction method based on neighborhood pixel intensity correction, which adopts a foreground detection algorithm based on neighborhood pixel intensity correction in dynamic scenes, offers stronger robustness, and is suitable for more complex changing scenes.
The invention provides a foreground extraction method based on neighborhood pixel intensity correction, which comprises inputting a video and the following steps:
step 1: acquiring an initial background according to a first frame of the frame sequence of the input video;
step 2: performing foreground-background differencing against the initial background;
step 3: setting an image difference threshold for the stability calculation;
step 4: using the Otsu method to determine the optimal threshold and extract the target from the background image (sketched in code below).
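By way of illustration only, the following Python sketch strings steps 1 to 4 together. The values tau=25 and delta=2, the plus/minus-one stability update, and the stand-in correction that copies current-frame pixels at flagged locations are assumptions made for the sketch (the patent's own formulas are detailed later in the description); the Otsu step uses OpenCV's built-in search rather than the derivation below.

```python
import cv2
import numpy as np

def extract_foreground(frames, tau=25, delta=2):
    """Sketch of steps 1-4; frames is an iterable of grayscale uint8 images."""
    it = iter(frames)
    background = next(it).astype(np.int32)            # step 1: B1(x,y) = F1(x,y)
    stability = np.zeros(background.shape, np.int32)  # S1(x,y) = 0
    for frame in it:
        f = frame.astype(np.int32)
        diff = np.abs(f - background)                 # step 2: Di = |Fi - B_{i-1}|
        moving = diff > tau                           # step 3: binarize with tau
        stability = stability + np.where(moving, -1, 1)  # assumed +/-1 update
        if stability.min() < delta:                   # correction gate on min stability
            # stand-in for the neighborhood correction: adopt the current frame
            # at the pixels flagged by P_i (moving and unstable)
            fix = moving & (stability < 0)
            background[fix] = f[fix]
        d_hat = np.abs(f - background).astype(np.uint8)
        # step 4: Otsu threshold on the updated difference image
        _, mask = cv2.threshold(d_hat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        yield mask
```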
Preferably, in said step 1, $B_1(x,y)=F_1(x,y)$ is calculated, where $B_1(x,y)$ and $F_1(x,y)$ are the background pixel value at coordinate point $(x,y)$ and the pixel value of the current frame at that point, respectively, with $x \le P$ and $y \le Q$, where $P$ and $Q$ denote the width and height of the initial background.
In any of the above schemes, preferably, for the $i$-th frame, the background image $B_i$ used for the difference can be obtained from $B_{i-1}$.
in any of the above schemesPreferably, the step 2 includes a difference D between the current background and the current frameiThe calculation formula is Di=|Fi(x,y)-Bi-1(x, y) |, wherein,
in any of the above schemes, preferably, the step 2 further comprises dividing the image difference DiSeparated from background image and moving object by formulaWhere τ is a self-set threshold.
In any of the above aspects, preferably, said step 3 includes setting the initial value of $S_1(x,y)$ to zero, i.e. $S_1(x,y)=0$, and calculating the stability $S(x,y)$ of each pixel point per frame, where $S_i$ denotes the stability matrix of frame $i$.
In any of the above aspects, preferably, said step 2 includes determining whether to correct the pixel value.
In any of the above embodiments, preferably, the determination is as follows: when the minimum stability $\min_{(x,y)} S_i(x,y)$ is smaller than a threshold $\delta$, the current background pixel value is corrected; when $\min_{(x,y)} S_i(x,y)$ is greater than or equal to $\delta$, the current background pixel value is not corrected.
In any of the above aspects, preferably, when a pixel belongs to the current moving object, $D_i(x,y)=1$; when there is an unstable value such that $S_i(x,y)<0$, reconstruction is performed to reduce the amount of computation, with $P_i=\{(x,y)\mid[D_i(x,y)=1]\cap[S_i(x,y)<0]\}$.
In any of the above schemes, preferably, said step 3 includes calculating the standard deviation between corresponding pixel points as $\sigma=\sqrt{\frac{1}{N}\sum_{(x,y)}(I(x,y)-\mu)^2}$, where $n$ is the side length of a square moving window, $N=n^2$ is the number of pixel points, and the mean $\mu$ is calculated from the pixel values of the image $I$ as $\mu=\frac{1}{N}\sum_{(x,y)}I(x,y)$.
In any of the above schemes, preferably, for $P_i$, two standard deviations are calculated, one from the background image and one from the current frame, and the background pixel value $B_i(x,y)$ is obtained from them.
in any of the above solutions, preferably, the size of the square moving window can be changed in motion, including (n)0+n1) Individual pixel point, n0And n1Respectively representing the number of non-moving pixel points and moving pixel points, and the corresponding pixel intensities are respectively g0And g1
In any of the above schemes, preferably, the calculation formula of the pixel mean $\mu$ becomes $\mu=\frac{n_0 g_0+n_1 g_1}{n_0+n_1}$.
In any of the above aspects, preferably, the calculation formula of the standard deviation becomes $\sigma=\frac{|g_0-g_1|\sqrt{n_0 n_1}}{n_0+n_1}$, where $|g_0-g_1|$ and $(n_0+n_1)$ are constants.
In any of the above schemes, preferably, said step 3 further includes, after the pixel intensity correction, using the detected background for subsequent foreground extraction and taking the processing result as the input of the subsequent $(i+1)$-th frame.
In any of the above schemes, preferably, said step 4 further includes obtaining the image difference $\hat D_i$ from the current frame and the background image, calculated as $\hat D_i=|F_i(x,y)-B_i(x,y)|$.
In any of the above schemes, preferably, said step 4 further comprises finding a reasonable threshold such that the weighted sum of the variances of the two classes is minimized, $\sigma_w^2(g)=\omega_{G_0}(g)\,\sigma_{G_0}^2(g)+\omega_{G_1}(g)\,\sigma_{G_1}^2(g)$, where $G_0$ is the background region, $G_1$ the foreground target region, $\omega_{G_0}(g)$ and $\omega_{G_1}(g)$ the class probabilities at intensity $g$, and $\sigma_{G_0}^2$, $\sigma_{G_1}^2$ the class variances.
In any of the above schemes, preferably, said step 4 further comprises separating the foreground target from the image difference $\hat D_i$, using the Otsu threshold method to determine the optimal threshold $T=\arg\min_g \sigma_w^2(g)$.
In any of the above aspects, preferably, the image difference $D_i$ becomes $D_i(x,y)=1$ if $\hat D_i(x,y)>T$ and $0$ otherwise, where $\hat D_i$ is a grayscale image.
In any of the above schemes, preferably, said step 4 further includes performing basic erosion and dilation processing on the edge breaks caused by abrupt changes of brightness and illumination.
In any of the above solutions, preferably, erosion serves to eliminate the boundary points of the object, with the principle $A\ominus S=\{(x,y)\mid S(x,y)\setminus A=\varnothing\}$, where $A$ is a target area on the plane $(x,y)$, $S$ is a template element of specified size and shape, $S(x,y)$ is the area covered by the template element $S$ located at coordinates $(x,y)$, and $S(x,y)\setminus A$ is the difference set of elements belonging to $S$ but not to $A$.
In any of the above solutions, preferably, dilation serves to expand the boundary points of the object, with the principle $A\oplus S=\{(x,y)\mid S(x,y)\cap A\neq\varnothing\}$, where $A$ is a target area in the $(x,y)$ image frame, $S$ is a structural element of specified size, the area covered by the structural element $S$ located at coordinates $(x,y)$ is defined as $S(x,y)$, and $S(x,y)\cap A$ is the intersection, the set belonging to both $S$ and $A$.
The foreground extraction method based on neighborhood pixel intensity correction can accurately extract foreground targets, follows a clear algorithmic idea, is simple to implement, effectively resists scene illumination changes, eliminates the influence of camera shake on foreground target extraction, and adapts well to dynamic scenes.
Drawings
FIG. 1 is a flow chart of a foreground extraction method based on neighborhood pixel intensity correction according to a preferred embodiment of the present invention.
FIG. 2 is the original image before erosion in a preferred embodiment of the foreground extraction method based on neighborhood pixel intensity correction according to the present invention.
FIG. 2A is a diagram of template S1 used in the embodiment of FIG. 2.
FIG. 2B is a diagram of the effect after erosion with template S1 in the embodiment of FIG. 2.
FIG. 2C is a diagram of template S2 used in the embodiment of FIG. 2.
FIG. 2D is a diagram of the effect after erosion with template S2 in the embodiment of FIG. 2.
FIG. 3 is the original image before dilation in a preferred embodiment of the foreground extraction method based on neighborhood pixel intensity correction according to the present invention.
FIG. 3A is a diagram of template S3 used in the embodiment of FIG. 3.
FIG. 3B is a diagram of the effect after dilation with template S3 in the embodiment of FIG. 3.
FIG. 3C is a diagram of template S4 used in the embodiment of FIG. 3.
FIG. 3D is a diagram of the effect after dilation with template S4 in the embodiment of FIG. 3.
Detailed Description
The invention is further illustrated with reference to the figures and the specific examples.
Example one
As shown in fig. 1, step 100 is performed to extract a sequence of frames. The continuous model based on neighborhood pixel intensity correction extracts moving objects from the background by updating with the motion information of the current frame. Step 105 is performed, using the first frame of the video input as the initial background, as shown in equation (1).
$B_1(x,y)=F_1(x,y)$   (1)
where $B_1(x,y)$ and $F_1(x,y)$ are the background pixel value at coordinate point $(x,y)$ and the pixel value of the current frame at that point, respectively, with $x \le P$ and $y \le Q$, where $P$ and $Q$ denote the width and height of the initial background. For the $i$-th frame, the background image $B_i$ used for the difference is obtained from $B_{i-1}$. First, the difference $D_i$ between the current background and the current frame is computed on grayscale images, as in equation (2): $D_i=|F_i(x,y)-B_{i-1}(x,y)|$.   (2)
image difference DiContains moving objects and noise, therefore, DiIt needs to be separated from the background image and the moving object by a threshold τ, as shown in equation (3).
In principle, a moving target differs clearly from noise and shadow. A higher value of $\tau$ removes noise, but some moving pixel points may then be misjudged as noise points; conversely, if $\tau$ is too small, noise points may be misjudged as moving target points. Thus $\tau$ directly affects $D_i$, and a suitable value of $\tau$ must be set in the experimental part. To reduce spurious appearances of moving pixel points in $D_i$ as much as possible, we consider the stability $S(x,y)$ of each pixel. $S(x,y)$ is computed from the number of times the pixel value changes and is updated every frame. If the pixel values of two consecutive frames change, the pixel point is unstable and the original value should be kept. The stability of each pixel is calculated from consecutive adjacent frames by formula (4),
where $S_i$ denotes the stability matrix of frame $i$, with an initial value of zero, i.e. $S_1(x,y)=0$. Because the stability is accumulated over frames, the stable value of a non-moving target pixel is larger than that of a moving target pixel.
During the calculation, if the current background is already close to the real background, correcting the pixel values is unnecessary and would only reduce efficiency. The specific decision is: if the minimum stability exceeds a given stability threshold $\delta$, i.e. $\min_{(x,y)} S_i(x,y) > \delta$, pixel value correction is no longer considered, and the current background is kept until the next frame; for example, $B_i=B_{i-1}$ is assigned to the $(i+1)$-th frame, and the method enters the foreground extraction stage directly. Conversely, if $\min_{(x,y)} S_i(x,y) < \delta$, the current background pixel values are corrected.
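Since formula (4) and the decision process appear only as images in the original, the sketch below assumes a plus/minus-one per-frame update; only the gate on the minimum stability is taken directly from the text.

```python
import numpy as np

def update_stability(stability, changed):
    # assumed form of formula (4): +1 for a pixel whose value did not change
    # between consecutive frames, -1 for one that did
    return stability + np.where(changed, -1, 1)

def background_needs_correction(stability, delta):
    # gate from the text: correct the background only when the least stable
    # pixel falls below the stability threshold delta
    return stability.min() < delta
```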
how to correct the pixel values when correction is needed, and both cases exist.
(1) the pixel belongs to the currently moving object, i.e. $D_i(x,y)=1$;
(2) there is an unstable value such that $S_i(x,y)<0$; for this case, reconstruction using equation (6) reduces the amount of computation:
$P_i=\{(x,y)\mid[D_i(x,y)=1]\cap[S_i(x,y)<0]\}$,   (6)
where $P_i$ is the set of pixels to be filtered. For correction, the difference between each pixel point of the background image and of the current frame image is compared, and the standard deviation between corresponding pixel points is then calculated: if the value is very small, the pixel values are close to the mean; otherwise, the pixel values are more dispersed. For a square moving window, the standard deviation $\sigma$ is calculated by equation (7): $\sigma=\sqrt{\frac{1}{N}\sum_{(x,y)}(I(x,y)-\mu)^2}$,   (7) where $n$ is the side length of the square moving window, $N=n^2$ is the number of pixel points, and the mean $\mu$ is calculated from the pixel values of the image $I$, as in equation (8): $\mu=\frac{1}{N}\sum_{(x,y)}I(x,y)$.   (8)
For $P_i$, two standard deviations must be calculated, one from the background image and one from the current frame; from these two standard deviations, the background pixel value $B_i(x,y)$ is calculated as in formula (9).
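Formula (9) appears only as an image in the original, so the combination rule in the sketch below is an assumption: the corrected pixel is taken from whichever of the background and the current frame has the smaller local standard deviation, i.e. the smoother, more background-like neighborhood. The window size n=3 is likewise illustrative.

```python
import numpy as np

def local_std(img, x, y, n):
    # standard deviation over an n-by-n window centered at (x, y),
    # clipped at the image border (equation (7))
    h = n // 2
    win = img[max(0, y - h):y + h + 1, max(0, x - h):x + h + 1]
    return win.astype(np.float64).std()

def correct_pixel(background, frame, x, y, n=3):
    # sketch of the correction for one pixel of P_i using the two standard
    # deviations of formula (9); the selection rule is an assumption
    sigma_b = local_std(background, x, y, n)
    sigma_f = local_std(frame, x, y, n)
    return frame[y, x] if sigma_f <= sigma_b else background[y, x]
```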
The size of the moving window ($n \times n$) may change during motion and affects the result of correcting the pixel values. To account for this effect, parameters are set: $n_0$ and $n_1$ denote the numbers of non-moving and moving pixel points, respectively, with corresponding pixel intensities $g_0$ and $g_1$. The whole moving window therefore contains $(n_0+n_1)$ pixel points, and the pixel mean of equation (8) becomes equation (10): $\mu=\frac{n_0 g_0+n_1 g_1}{n_0+n_1}$.   (10)
The standard deviation formula becomes equation (11): $\sigma=\frac{|g_0-g_1|\sqrt{n_0 n_1}}{n_0+n_1}$.   (11) Here $|g_0-g_1|$ and $(n_0+n_1)$ are constants, so equation (11) is determined by $\sqrt{n_0 n_1}$. The standard deviation is therefore determined by the numbers of non-moving and moving pixels, which influences the neighborhood pixel intensity correction model algorithm. Adjusting the standard deviation parameter can thus accommodate a variety of background scenes, such as the motion of low-, medium-, and high-speed objects.
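A quick numeric check of equations (10) and (11) as reconstructed above: with $|g_0-g_1|$ and $(n_0+n_1)$ fixed, the standard deviation is driven by $\sqrt{n_0 n_1}$ and grows as the moving population takes up more of the window. The intensities 40 and 200 and the 3x3 window split are made-up illustration values.

```python
import math

def window_stats(n0, n1, g0, g1):
    mu = (n0 * g0 + n1 * g1) / (n0 + n1)                   # equation (10)
    sigma = abs(g0 - g1) * math.sqrt(n0 * n1) / (n0 + n1)  # equation (11)
    return mu, sigma

print(window_stats(8, 1, 40, 200))  # slow object: one moving pixel in a 3x3 window
print(window_stats(5, 4, 40, 200))  # faster object: almost half the window moving
```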
After the pixel intensity correction, the neighborhood pixel intensity correction model algorithm uses the detected background for subsequent foreground extraction and takes the processing result as the input of the subsequent $(i+1)$-th frame.
The method then enters the stage of extracting foreground objects from the detected background. The main idea of the foreground extraction is an adaptive background difference algorithm. The difference between the current frame and the background image is obtained, by analogy with equation (2), as equation (12): $\hat D_i=|F_i(x,y)-B_i(x,y)|$.   (12)
image-based disparityAnd separating out the foreground target, and determining an optimal threshold value by using an Otsu threshold value method so as to extract the target from the background image. G0As a background region, G1A foreground target area; by minimizing the intra-class variance, the threshold may reduce classification errors. A reasonable threshold is found such that the weighted sum of the variances of the two classes is minimized, as shown in equation (13).
$\sigma_w^2(g)=\omega_{G_0}(g)\,\sigma_{G_0}^2(g)+\omega_{G_1}(g)\,\sigma_{G_1}^2(g)$,   (13) where $\omega_{G_0}(g)$ and $\omega_{G_1}(g)$ are the class probabilities at intensity $g$, and $\sigma_{G_0}^2$, $\sigma_{G_1}^2$ are the class variances. Following document [13], the threshold is defined by equation (14): $T=\arg\min_g \sigma_w^2(g)$.   (14)
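A minimal sketch of this threshold search over a uint8 difference image, following equations (13) and (14); in practice cv2.threshold with THRESH_OTSU performs the same search.

```python
import numpy as np

def otsu_threshold(d_hat):
    # pick the gray level g that minimizes the weighted sum of the two
    # class variances of the difference image d_hat (equations (13)-(14))
    hist = np.bincount(d_hat.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    levels = np.arange(256, dtype=np.float64)
    best_g, best_val = 0, np.inf
    for g in range(1, 256):
        w0, w1 = p[:g].sum(), p[g:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:g] * p[:g]).sum() / w0
        mu1 = (levels[g:] * p[g:]).sum() / w1
        var0 = ((levels[:g] - mu0) ** 2 * p[:g]).sum() / w0
        var1 = ((levels[g:] - mu1) ** 2 * p[g:]).sum() / w1
        val = w0 * var0 + w1 * var1          # equation (13)
        if val < best_val:
            best_g, best_val = g, val
    return best_g                             # equation (14)
```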
Equation (3) then becomes equation (15): $D_i(x,y)=1$ if $\hat D_i(x,y)>T$, and $0$ otherwise.   (15) The produced $\hat D_i$ is a grayscale image. Abrupt changes in brightness and illumination can cause edge breaks in the foreground, which are treated with basic erosion and dilation.
In mathematical morphology, the role of erosion is to eliminate the boundary points of an object. In image processing, points smaller than the structuring element can be eliminated by the erosion operation; that is, erosion can split a target region that contains fine connected parts.
The erosion principle is: $A\ominus S=\{(x,y)\mid S(x,y)\setminus A=\varnothing\}$, where $A$ is a target area on the plane $(x,y)$, $S$ is a template element of specified size and shape, $S(x,y)$ is the area covered by the template element $S$ located at coordinates $(x,y)$, and $S(x,y)\setminus A$ is the difference set of elements belonging to $S$ but not to $A$.
In image processing this can be understood as defining a template $S$ and scanning each pixel of the image from left to right and top to bottom: when the template, placed at a pixel, lies completely inside the target, the current pixel is retained; otherwise, the current point is deleted.
The role of dilation in morphological operations is to expand the boundary points of an object. Through a specific structuring element, a dilation operation can connect adjacent, nearby regions; however, dilation also enlarges small outliers and sensitive points. The dilation principle is:
the method comprises the steps that A is a target area in an image frame (x, y), S is a structural element with a specified size, the area represented by the structural element S located on coordinates (x, y) is defined to be S (x, y), S (x, y) ∩ A is an intersection and represents that the structural element S belongs to a set of S and A at the same time, starting from the upper left corner of the image, according to a scanning order, when the structural element is located on a certain pixel point, the structural element is intersected with the target, the pixel point is reserved, and otherwise, the pixel point is deleted.
Example two
The erosion is realized as follows. The template S is a 3 x 3 cross-shaped structuring element. The eroded target area A(x, y) is an 8 x 8 binarized image. While scanning and eroding pixel points one by one, if the currently scanned pixel A(x, y) has value 1 (assuming 1 represents the pixel value of a black area in the picture) and the directly adjacent pixels above, below, left, and right within its eight-neighborhood also have value 1, with the remaining pixels having value 0, the pixel is considered to match the template S and A(x, y) is set to 1; otherwise it is set to 0. If the eight-neighborhood of A(4, 4) is the first to completely match the values of the template S, then A(4, 4) has value 1 after the erosion operation. Eroding an image with different templates gives different results: fig. 2 is image A before erosion, fig. 2A is template S1, and image 2B is obtained after eroding image A with template S1; fig. 2C shows template S2, and image 2D is obtained after eroding image A with template S2.
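The pixel-by-pixel scan of this example can be written directly. The sketch below implements the template-fit test for a general binary template; the cross-shaped S1 array is an assumed stand-in, since the actual figures are not reproduced here.

```python
import numpy as np

def erode_binary(A, S):
    # keep a pixel only when every 1-cell of the template S, anchored at its
    # center, lies on a 1-pixel of A (the matching test of Example 2)
    sh, sw = S.shape
    oy, ox = sh // 2, sw // 2
    out = np.zeros_like(A)
    for y in range(oy, A.shape[0] - oy):
        for x in range(ox, A.shape[1] - ox):
            patch = A[y - oy:y - oy + sh, x - ox:x - ox + sw]
            if np.all(patch[S == 1] == 1):
                out[y, x] = 1
    return out

S1 = np.array([[0, 1, 0],
               [1, 1, 1],
               [0, 1, 0]], dtype=np.uint8)  # assumed cross template like S1
```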
Example three
The dilation of a digital image is realized with the same kind of template as image erosion, the template expressing the shape of the structuring element. The dilation process can be described as moving the template over the target area to traverse each pixel of the target image: when the template is positioned at pixel coordinates (x, y), if any pixel corresponding to a 1 in the template is 1, the pixel at the position corresponding to (x, y) in the dilated result is set to 1; otherwise it is set to 0. As with image erosion, templates of different sizes and shapes yield dilated images that differ from the original. Unlike erosion, the dilated image does not necessarily contain the original target completely: if the defined template has a 1 at its center, the dilated image is a complete expansion of the original target; if the center is not 1, the dilated image is not a complete expansion, and some original pixel points are eroded away. Fig. 3 shows image B before dilation, fig. 3A shows template S3, and image 3B is obtained by dilating image B with template S3; fig. 3C shows template S4, and image 3D is obtained by dilating image B with template S4.

Claims (10)

1. A foreground extraction method based on neighborhood pixel intensity correction, comprising inputting a video, characterized by further comprising the following steps:
step 1: acquiring an initial background according to a first frame of the frame sequence of the input video;
step 2: performing foreground-background differencing against the initial background;
step 3: setting an image difference threshold for the stability calculation;
step 4: using the Otsu method to determine the optimal threshold and extract the target from the background image.
2. The foreground extraction method based on neighborhood pixel intensity correction as claimed in claim 1, wherein: in said step 1, $B_1(x,y)=F_1(x,y)$ is calculated, where $B_1(x,y)$ and $F_1(x,y)$ are the background pixel value at coordinate point $(x,y)$ and the pixel value of the current frame at that point, respectively, with $x \le P$ and $y \le Q$, where $P$ and $Q$ denote the width and height of the initial background.
3. The foreground extraction method based on neighborhood pixel intensity correction as claimed in claim 2, wherein: for the $i$-th frame, the background image $B_i$ used for the difference can be obtained from $B_{i-1}$.
4. The foreground extraction method based on neighborhood pixel intensity correction as claimed in claim 3, wherein: said step 2 comprises computing the difference $D_i$ between the current background and the current frame as $D_i=|F_i(x,y)-B_{i-1}(x,y)|$.
5. The foreground extraction method based on neighborhood pixel intensity correction as claimed in claim 4, wherein: said step 2 further comprises separating the image difference $D_i$ into background image and moving object by the formula $D_i(x,y)=1$ if $D_i(x,y)>\tau$ and $0$ otherwise, where $\tau$ is a self-set threshold.
6. The foreground extraction method based on neighborhood pixel intensity correction as claimed in claim 5, wherein: said step 3 comprises setting the initial value of $S_1(x,y)$ to zero, i.e. $S_1(x,y)=0$, and calculating the stability $S(x,y)$ of each pixel point per frame, where $S_i$ denotes the stability matrix of frame $i$.
7. The foreground extraction method based on neighborhood pixel intensity correction as claimed in claim 6, wherein: said step 2 includes determining whether to correct the pixel value.
8. The foreground extraction method based on neighborhood pixel intensity correction as claimed in claim 7, wherein: the determination is as follows: when the minimum stability $\min_{(x,y)} S_i(x,y)$ is smaller than a threshold $\delta$, the current background pixel value is corrected; when $\min_{(x,y)} S_i(x,y)$ is greater than or equal to $\delta$, the current background pixel value is not corrected.
9. The foreground extraction method based on neighborhood pixel intensity correction as claimed in claim 8, wherein: when a pixel belongs to the current moving object, $D_i(x,y)=1$; when there is an unstable value such that $S_i(x,y)<0$, reconstruction is performed to reduce the amount of computation, with $P_i=\{(x,y)\mid[D_i(x,y)=1]\cap[S_i(x,y)<0]\}$.
10. The foreground extraction method based on neighborhood pixel intensity correction as claimed in claim 9, wherein: said step 3 comprises calculating the standard deviation between corresponding pixel points as $\sigma=\sqrt{\frac{1}{N}\sum_{(x,y)}(I(x,y)-\mu)^2}$, where $n$ is the side length of a square moving window, $N=n^2$ is the number of pixel points, and the mean $\mu$ is calculated from the pixel values of the image $I$ as $\mu=\frac{1}{N}\sum_{(x,y)}I(x,y)$.
CN201810039690.XA 2018-01-16 2018-01-16 Foreground extraction method based on neighborhood pixel intensity correction Active CN108280841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810039690.XA CN108280841B (en) 2018-01-16 2018-01-16 Foreground extraction method based on neighborhood pixel intensity correction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810039690.XA CN108280841B (en) 2018-01-16 2018-01-16 Foreground extraction method based on neighborhood pixel intensity correction

Publications (2)

Publication Number Publication Date
CN108280841A 2018-07-13
CN108280841B CN108280841B (en) 2022-03-29

Family

ID=62803722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810039690.XA Active CN108280841B (en) 2018-01-16 2018-01-16 Foreground extraction method based on neighborhood pixel intensity correction

Country Status (1)

Country Link
CN (1) CN108280841B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104616290A (en) * 2015-01-14 2015-05-13 合肥工业大学 Target detection algorithm in combination of statistical matrix model and adaptive threshold
US20170161905A1 (en) * 2015-12-07 2017-06-08 Avigilon Analytics Corporation System and method for background and foreground segmentation
CN106846359A (en) * 2017-01-17 2017-06-13 湖南优象科技有限公司 Moving target method for quick based on video sequence
CN106934819A (en) * 2017-03-10 2017-07-07 重庆邮电大学 A kind of method of moving object segmentation precision in raising image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU, Jie: "Multi-feature fusion particle filter target tracking in complex scenes", China Masters' Theses Full-text Database, Information Science and Technology Series (Monthly) *
YANG, Hongchen et al.: "Research on background reconstruction methods in video investigation", Journal of China Criminal Police College *
HAO, Zhicheng et al.: "Moving target detection in dynamic images based on the stability matrix", Acta Optica Sinica *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110769215A (en) * 2018-08-21 2020-02-07 成都极米科技股份有限公司 Thermal defocus compensation method and projection device
CN110769215B (en) * 2018-08-21 2021-12-03 成都极米科技股份有限公司 Thermal defocus compensation method and projection device
CN112070786A (en) * 2020-07-17 2020-12-11 中国人民解放军63892部队 Alert radar PPI image target/interference extraction method
CN112070786B (en) * 2020-07-17 2023-11-24 中国人民解放军63892部队 Method for extracting warning radar PPI image target and interference
CN112561951A (en) * 2020-12-24 2021-03-26 上海富瀚微电子股份有限公司 Motion and brightness detection method based on frame difference absolute error and SAD
CN112561951B (en) * 2020-12-24 2024-03-15 上海富瀚微电子股份有限公司 Motion and brightness detection method based on frame difference absolute error and SAD

Also Published As

Publication number Publication date
CN108280841B (en) 2022-03-29

Similar Documents

Publication Publication Date Title
CN109961049B (en) Cigarette brand identification method under complex scene
Eveland et al. Background modeling for segmentation of video-rate stereo sequences
EP1859410B1 (en) Method of tracking objects in a video sequence
CN107016691B (en) Moving target detecting method based on super-pixel feature
EP1859411B1 (en) Tracking objects in a video sequence
CN111681249B (en) Grabcut-based improved segmentation algorithm research of sand particles
CN111886600B (en) Apparatus and method for instance level segmentation of images
CN110517283A (en) Attitude Tracking method, apparatus and computer readable storage medium
CN109685045B (en) Moving target video tracking method and system
CN112184759A (en) Moving target detection and tracking method and system based on video
JP2006209755A (en) Method for tracing moving object inside frame sequence acquired from scene
CN108280841B (en) Foreground extraction method based on neighborhood pixel intensity correction
CN108876820B (en) Moving target tracking method under shielding condition based on mean shift
CN105869174B (en) A kind of Sky Scene image partition method
Ng et al. Object tracking initialization using automatic moving object detection
CN114743152A (en) Automatic extraction method and system for video key frames of blast furnace burden surface
JP7096175B2 (en) Object extraction method and device
CN112115878A (en) Forest fire smoke root node detection method based on smoke area density
KR20120130462A (en) Method for tracking object using feature points of object
CN113516680A (en) Moving target tracking and detecting method under moving background
CN107832732B (en) Lane line detection method based on treble traversal
CN109102520A (en) The moving target detecting method combined based on fuzzy means clustering with Kalman filter tracking
CN109145875B (en) Method and device for removing black frame glasses in face image
Zhang et al. Motion detection based on improved Sobel and ViBe algorithm
CN111079516A (en) Pedestrian gait segmentation method based on deep neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant