CN100446544C - Method for extraction method of video object external boundary - Google Patents
Method for extraction method of video object external boundary
- Publication number
- CN100446544C CNB2005100215413A CN200510021541A
- Authority
- CN
- China
- Prior art keywords
- time domain
- variation
- image
- region
- seed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Analysis (AREA)
Abstract
The invention relates to a method for extracting the external boundary of a video object. Two successive gray-level frames after global motion compensation are processed with a seed region growing method based on a Gaussian noise model to generate a frame-difference image; temporal change regions are then extracted and their motion vectors computed in order to distinguish the regions that fit the motion model from invalid regions; the moving object is then extracted and mended, the spatial-gradient maximum points on its external boundary are detected, and the canny edges passing through those maximum points are connected to obtain the external boundary of the moving object. The invention obtains an accurate external boundary, and its computation speed is suitable for real-time systems.
Description
Technical field
The invention belongs to the technical field of image processing, in particular to video object segmentation in image processing technology.
Background technology
To support content-based interactivity, that is, to support independent coding and decoding of content, the MPEG-4 video verification model introduced the concept of the video object plane (VOP: Video Object Plane). Video object segmentation means dividing an image sequence or video into regions according to some criterion, in order to isolate from the video the entities that carry a certain semantics. Such a semantic entity in digital video is called a video object. The external boundary of a video object is the outermost contour of the video object, so extracting the external boundary yields the region occupied by the whole video object in the image and thereby realizes the segmentation of the video object. A single picture only carries spatial image information defined on its coordinates, whereas a video image also carries the temporal information of each frame through its correlation with the preceding and following frames. A video image therefore carries information in both the spatial and the temporal directions.
Video object segmentation is the prerequisite of MPEG-4 video processing and computation. It has wide applications in many civil areas such as computer vision, traffic monitoring, visual early warning and machine navigation, and also plays an important role in military domains such as range television measurement, television tracking and aircraft television guidance. Video segmentation is a core operation of object-based video coding, multimedia description and intelligent signal processing, but effective segmentation is one of the most difficult tasks and challenges in image analysis. (See Haritaoglu, I., Harwood, D., Davis, L.S., "W4: real-time surveillance of people and their activities", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, Issue 8, Aug. 2000, pages 809-830.)
According to the information used and the methods adopted by current researchers, video object segmentation can be divided into three major classes: (1) temporal segmentation; (2) spatial segmentation with temporal tracking; (3) joint spatio-temporal segmentation. (See Fuhui Long, Dagan Feng, Hanchuan Peng, Wan-Chi Siu, "Extracting semantic video objects", IEEE Computer Graphics and Applications, Vol. 21, Jan. 2001, pages 48-55.)
The first class of methods uses temporal information only, but because existing methods of this class cannot solve the problems of large computation and inaccurate localization, they do not obtain good results. The second class first performs image segmentation in the spatial domain and then tracks the segmented image regions over time; but when facing the image sequences with complex backgrounds that dominate today, such methods are usually both time-consuming and short of the expected result. The third class, which makes full use of both kinds of information, has been widely promoted, and the method proposed by the present invention belongs to this scheme. However, because the amount of information involved is huge, and the temporal and spatial information is in general inconsistent, that is, the result obtained in the temporal domain does not agree with the spatial information in the frame, the methods used by most current researchers cannot fuse spatial and temporal information well, which leads to excessive computation and fails to obtain the complete information of the video object.
Summary of the invention
The purpose of the invention is to provide a method for extracting the external boundary of a video object that uses both temporal and spatial information; it features strong noise resistance, reasonable and fast fusion of spatial and temporal information, fast computation and strong robustness.
For convenience of describing the content of the invention, the following terms are first defined:
1. Global motion compensation: global motion is the overall motion of the entire image caused during the recording of a video stream by unavoidable camera operations such as zooming, horizontal motion, vertical motion and rotation. Global motion compensation compensates one frame with respect to the previous frame by the motion amount computed with a global motion method, so that the influence of camera motion is removed from the two frames. (For the concrete method see "Digital Video Processing", Chinese translation by Cui.)
2. Seed region growing: seed region growing gathers pixels with similar properties into regions. Specifically, for each region to be segmented, first find a pixel as the growth seed, then merge into the seed region those pixels in the neighborhood of the seed pixel that have the same or similar properties (judged by a predetermined growth or similarity criterion). These new pixels are treated as new seed pixels and the process continues until no more pixels satisfy the condition; the region has then grown completely.
The region growing method has to solve three problems:
A. selecting or determining a group of seed pixels that correctly represent the desired region;
B. determining the growth criterion;
C. formulating the condition that stops the growth process. (For the concrete method see "Machine Vision", written by Jia Yun.)
3. Covered background region: background that is not occluded by the moving object in the previous frame but becomes occluded by it in the next frame as the object moves. (See "Digital Video Processing", Chinese translation by Cui.)
4. Uncovered background region: background that is occluded by the moving object in the previous frame but appears in the next frame as the object moves. (See "Digital Video Processing", Chinese translation by Cui.)
5. Gaussian noise: noise whose statistical properties follow a Gaussian distribution. In general signal processing, the interfering white noise is regarded as Gaussian noise, so that its mean and variance can be used conveniently in analysis. (See "Digital Image Processing", Gonzalez.)
6. Binarization: representing the whole region with the binary features 0 and 1. (See "Digital Image Processing", Gonzalez.)
7. Morphological filtering: filtering with the methods of mathematical morphology, usually combinations of the morphological dilation and erosion operations into opening and closing operations that remove noise from binary images. (For the concrete method see "Image Processing and Analysis: Mathematical Morphology and Applications", written by Cui Yi.)
8. Structuring element: the image set with which mathematical morphology processes other images. (For the concrete method see "Image Processing and Analysis: Mathematical Morphology and Applications", written by Cui Yi.)
9. Morphological dilation: the most basic morphological operator, defined as A ⊕ B = { a + b | a ∈ A, b ∈ B }, where A and B are two subsets of the n-dimensional Euclidean space E^n and B is in general the morphological structuring element. (For the concrete method see "Image Processing and Analysis: Mathematical Morphology and Applications", written by Cui Yi.)
10. Morphological erosion: the most basic morphological operator, defined as A ⊖ B = { x | x + b ∈ A for every b ∈ B }, where A and B are two subsets of the n-dimensional Euclidean space E^n and B is in general the morphological structuring element. (For the concrete method see "Image Processing and Analysis: Mathematical Morphology and Applications", written by Cui Yi.)
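A minimal sketch of how the opening and closing operations used for morphological filtering are built from these two operators, in NumPy (the function names and the 3×3 all-ones structuring element in the example are illustrative assumptions, not from the patent):

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: OR of copies of img shifted by each point of se."""
    H, W = img.shape
    h, w = se.shape
    pad = np.pad(img, ((h // 2, h // 2), (w // 2, w // 2)))
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            if se[i, j]:
                out |= pad[i:i + H, j:j + W]
    return out

def erode(img, se):
    """Binary erosion: a pixel survives only if se fits inside the region."""
    H, W = img.shape
    h, w = se.shape
    pad = np.pad(img, ((h // 2, h // 2), (w // 2, w // 2)))
    out = np.ones_like(img)
    for i in range(h):
        for j in range(w):
            if se[i, j]:
                out &= pad[i:i + H, j:j + W]
    return out

def open_then_close(img, se):
    """Opening (erode, then dilate) removes small specks; closing (dilate,
    then erode) fills small holes: the usual binary denoising combination."""
    opened = dilate(erode(img, se), se)
    return erode(dilate(opened, se), se)
```

Opening followed by closing removes isolated noise pixels while preserving compact regions, which is what the binarized frame-difference image needs before labeling.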
11. Connected: given pixels p, q ∈ S, if there exists a path from p to q such that all pixels on the path are contained in S, then p is said to be connected to q. (See "Digital Image Processing", Gonzalez.)
12. canny edge: the binary image of the image boundaries obtained with the Canny principle. (See "Digital Image Processing", Gonzalez.)
13. Mask: a binarized region in which 1 denotes the target and 0 denotes the irrelevant area. Applying a mask to an image keeps the values of the image where the corresponding mask is 1 and sets the image to 0 where the mask is 0. (See "Digital Image Processing", Gonzalez.)
14. Regional connectivity labeling: performing connectivity labeling on the whole binarized image, assigning one label to each connected region. (For the concrete method see "Machine Vision", written by Jia Yun.)
15. Phase correlation method: a method that computes the optical-flow motion vector from the phase relation of the Fourier transforms of the corresponding images. (For the concrete method see "Digital Video Processing", Chinese translation by Cui.)
16. Outer contour extraction: extracting the external boundary points of the outermost contour point by point in the clockwise direction. (For the concrete method see "Machine Vision", written by Jia Yun.)
17. Compensation: if a pixel of the current frame is moved by a defined displacement to the next-frame image with its gray value unchanged, the displacement is called the motion compensation, and moving pixels by this displacement is called compensation. (For the concrete method see "Digital Video Processing", Chinese translation by Cui.)
18. Matching: the similarity of two pixels, measured by the absolute difference of their gray values.
19. Spatial gradient: the squared differences of the gray values at the positions above and below (and left and right of) an image pixel.
The method for extracting the external boundary of a video object provided by the invention comprises the following steps (for the overall flow see Fig. 1):
Step 1: for two successive gray-level frames of the video stream that have undergone global motion compensation, perform frame differencing with a seed region growing method conditioned on the Gaussian noise model, obtaining a noise-resistant binarized temporal frame-difference image in which the regions with value 1 are the temporal change regions;
The concrete method is: first compute the absolute difference of the two successive motion-compensated gray-level frames, and set absolute differences greater than 40 to 0; compute the mean (denoted M) and variance (denoted A) of the remaining absolute differences; then, from the mean M and variance A of the absolute-difference image, set the seed threshold M + 4*A and set the interval [M + A, M + 4*A] as the seed growth condition. Search the absolute-difference image of the two motion-compensated frames point by point; every pixel whose absolute difference is greater than the seed threshold is set as a seed pixel. Then search the neighborhood of each seed pixel; pixels satisfying the growth condition are set as seed pixels. After the whole absolute-difference image has been searched, the grown seed region is obtained; all pixels in this region are set to 1 and all pixels outside it to 0, giving the binarized temporal frame-difference image.
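Step 1 can be sketched as follows in NumPy, under the thresholds stated above (differences above 40 discarded, seeds above M + 4·A, growth within [M + A, M + 4·A]); the function name and the breadth-first growth order are assumptions:

```python
import numpy as np
from collections import deque

def frame_difference_mask(f_cur, f_prev, clip=40):
    """Binarized temporal frame difference via Gaussian-model seed growing.

    Seeds: |diff| > M + 4*A; growth: M + A <= |diff| <= M + 4*A,
    where M and A are the mean and variance of the clipped difference,
    as stated in step 1.
    """
    diff = np.abs(f_cur.astype(np.int32) - f_prev.astype(np.int32))
    diff[diff > clip] = 0                      # discard outliers above 40
    M, A = diff.mean(), diff.var()
    seed_th, grow_lo = M + 4 * A, M + A
    mask = np.zeros(diff.shape, dtype=np.uint8)
    q = deque(zip(*np.nonzero(diff > seed_th)))
    for y, x in q:
        mask[y, x] = 1                         # all seed pixels
    while q:                                   # grow into the neighborhood
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < diff.shape[0] and 0 <= nx < diff.shape[1]
                        and not mask[ny, nx]
                        and grow_lo <= diff[ny, nx] <= seed_th):
                    mask[ny, nx] = 1
                    q.append((ny, nx))
    return mask
```

The growth interval keeps moderate differences attached to strong seeds while isolated weak noise stays below M + A and is never grown.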
Step 2: perform morphological filtering on the binarized temporal frame-difference image generated by seed region growing in step 1, obtaining the denoised temporal change-region image;
Step 3: apply the regional connectivity labeling method to the denoised temporal change-region image obtained in step 2, obtaining a labelled temporal change-region image;
Step 4: take out each label of the labelled change-region image obtained in step 3 one by one, then scan the labelled temporal change-region image pixel by pixel; if the label of a pixel equals the label taken out, the pixel is set to 1, otherwise to 0, and unlabelled pixels are set to 0. After the scan, the temporal single-change-region binary image of that label is formed; one such image is generated for every label;
Step 5: with the temporal single-change-region binary image of each label obtained in step 4 as a mask image, scan the two successive motion-compensated gray-level frames of step 1: where the mask is 1, keep the gray values at the corresponding positions of the two frames; where the mask is 0, set the gray values at the corresponding positions of the two frames to 0. After the scan, each label yields two temporal single-change-region gray-level images that keep only the gray values of the corresponding temporal change region;
Step 6: compute the coordinates of the four vertices of the bounding rectangle of the region with value 1 in each temporal single-change-region binary image obtained in step 4;
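The vertex computation can be sketched as follows (the corner ordering used here is an assumption; the step only requires the four vertices of the bounding rectangle):

```python
import numpy as np

def bounding_rect(mask):
    """Four vertices of the bounding rectangle of the 1-valued region,
    returned as (x, y) pairs: top-left, top-right, bottom-left, bottom-right."""
    ys, xs = np.nonzero(mask)
    y0, y1 = int(ys.min()), int(ys.max())
    x0, x1 = int(xs.min()), int(xs.max())
    return (x0, y0), (x1, y0), (x0, y1), (x1, y1)
```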
Step 7: according to the four vertex coordinates of the bounding rectangle of each label's temporal single-change-region binary image obtained in step 6, extract the image inside the bounding rectangle from the two temporal single-change-region gray-level images of each label obtained in step 5, forming the two local temporal change-region gray-level images of each label;
Step 8: for the two local temporal change-region gray-level images of each label obtained in step 7, use the phase correlation method to compute the relative motion displacement, on the two local images of the same label, of the actual moving object that produces the local temporal change region;
Step 9: for the two local temporal change-region gray-level images of each label obtained in step 7, with the relative motion displacement of the moving object obtained in step 8 as the pixel matching condition, use the seed region growing method to remove the covered background region and the uncovered background region and generate the temporal single-moving-object region.
The concrete method is: according to the relative motion displacement of the moving object on the two local temporal change-region gray-level images of each label obtained in step 8, set the seed condition as: if the absolute gray difference of corresponding pixels on the two local images, after compensating by the relative motion displacement, is less than 2 and is less than the absolute gray difference without motion compensation, the pixel is a seed pixel. Set the growth condition as: if the absolute gray difference of corresponding pixels after compensating by the relative motion displacement is greater than 2 but less than 5, and is less than the absolute gray difference without motion compensation, the pixel is a growth pixel. Scan the pixels of the two local images of each label one by one; pixels satisfying the seed condition are set as seed pixels. Then search the neighborhood of each seed pixel; points satisfying the growth condition are set as seed pixels. After all pixels of the local temporal change-region gray-level images have been searched, the grown temporal single-moving-object region is obtained; all pixels in this region are set to 1 and all pixels outside it to 0, giving the binarized temporal single-moving-object region.
Step 10: apply the mending method to the binarized temporal single-moving-object region obtained in step 9, obtaining the complete temporal single-moving-object region (the above is the temporal boundary extraction flow, shown in Fig. 2);
The concrete mending method is: scan the rows and columns of the binarized temporal single-moving-object region; a non-object pixel lying between two moving-object pixels of the same row (or the same column) is set to a moving-object pixel, other pixels are unchanged, and the complete temporal single-moving-object region is obtained.
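One simple reading of this mending rule, filling every non-object pixel lying between the first and last object pixels of each row and then of each column, can be sketched as (the function name and the first-to-last interpretation are assumptions):

```python
import numpy as np

def repair(mask):
    """Row-and-column mending: fill the gaps between the outermost object
    pixels of every row, then of every column, of the binary object mask."""
    out = mask.copy()
    for row in out:                       # rows: fill between first and last 1
        idx = np.nonzero(row)[0]
        if idx.size >= 2:
            row[idx[0]:idx[-1] + 1] = 1
    for j in range(out.shape[1]):         # columns: same, on the row result
        idx = np.nonzero(out[:, j])[0]
        if idx.size >= 2:
            out[idx[0]:idx[-1] + 1, j] = 1
    return out
```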
Step 11: apply the outer contour extraction method to the complete temporal single-moving-object region generated in step 10, obtaining the external boundary contour points of the moving object;
Step 12: replace the moving-object external boundary contour points obtained in step 11, point by point, with spatial-boundary maximum points, obtaining the temporal-object spatial-boundary maximum-point image.
The concrete method is: compute the spatial gradient over the 8-neighborhood of each temporal boundary contour point, compare it with the spatial gradient of the current temporal boundary point weighted by W (W is chosen as a number greater than 1 to strengthen the temporal result), and select the point with the maximum spatial gradient as the temporal-object spatial-boundary maximum point, obtaining the temporal-object spatial-boundary maximum-point image;
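A sketch of the point-by-point replacement (the gradient follows term 19 as squared differences of opposite neighbors; the weight W = 1.5, the border clamping and the function names are assumptions):

```python
import numpy as np

def grad_mag(img, y, x):
    """Squared-difference spatial gradient at (y, x), per definition 19."""
    gy = int(img[min(y + 1, img.shape[0] - 1), x]) - int(img[max(y - 1, 0), x])
    gx = int(img[y, min(x + 1, img.shape[1] - 1)]) - int(img[y, max(x - 1, 0)])
    return gx * gx + gy * gy

def refine_point(img, y, x, W=1.5):
    """Replace a temporal boundary point by the 8-neighbor with the largest
    spatial gradient; the current point's own gradient is weighted by W > 1
    so that the temporal result is favored unless a neighbor is clearly
    stronger."""
    best, best_g = (y, x), W * grad_mag(img, y, x)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if (dy or dx) and 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]:
                g = grad_mag(img, ny, nx)
                if g > best_g:
                    best, best_g = (ny, nx), g
    return best
```

A boundary point sitting one pixel off a strong intensity edge is pulled onto the edge, while a point already on the edge stays put because of the weight W.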
Step 13: extract the boundaries of the two local temporal change-region gray-level images of each label obtained in step 7 with the canny boundary extraction method, obtaining the spatial canny boundary image of the local temporal change region;
Step 14: on the spatial canny boundary image of the local temporal change region from step 13, perform the connection operation according to the temporal-object spatial-boundary maximum-point image obtained in step 12, obtaining the spatio-temporally fused moving-object external boundary (the above is the spatial boundary refinement and connection, shown in Fig. 3).
The concrete connection operation is: keep the spatial canny boundaries of the local temporal change region that contain spatial-boundary maximum points, remove those that contain none, and form the spatio-temporally fused moving-object external boundary, i.e. the external boundary of the video object.
Through the above steps, the external boundary of the video object is obtained.
It should be noted that:
(1) The invention makes use of the fact that the moving object has a certain overlap between two successive video images of the video stream, and it uses the phase correlation method to compute the optical-flow vector of the moving object, so the invention applies to situations where the speed of the moving object is normal and the moving object is non-deforming.
(2) The Gaussian noise model used in step 1 of the invention is based on the theory that the frame-difference image carries Gaussian noise, which is the common noise criterion for digital images. The seed region growing method based on this model can therefore remove noise well and obtain effective motion change regions.
(3) Three kinds of regions exist in each labelled change region obtained in step 4: the moving object, the covered background and the uncovered background. Step 5 then computes, for the corresponding change region, the displacement between the two successive frames, by which the three kinds of regions are distinguished. The principle is: since most of a change region belongs to the moving object, which has a single motion vector, the relation of formula 1 holds, where (d1, d2) is the displacement vector of the moving object, f1(x, y) and f2(x + d1, y + d2) denote the corresponding image blocks in the two frames, F1(u, v) and F2(u, v) are their Fourier transforms, and (u, v) are the frequency-domain variables. The Fourier transforms then satisfy the relation of formula 2 and their phases the relation of formula 3, so the inverse transform of the phase-difference factor yields a pulse at (d1, d2), from which (d1, d2) is computed; here φ1 and φ2 are the phases of F1 and F2 respectively, and j is the imaginary unit.
f1(x, y) = f2(x + d1, y + d2) (formula 1)
F1(u, v) = F2(u, v) · e^(j2π(u·d1 + v·d2)) (formula 2)
e^(j(φ1(u, v) − φ2(u, v))) = e^(j2π(u·d1 + v·d2)) (formula 3)
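Under formulas 1-3, the displacement appears as the peak of the inverse transform of the normalized cross-power spectrum. A NumPy sketch (the function name is illustrative; the returned pair is the shift by which f2 must be rolled to coincide with f1):

```python
import numpy as np

def phase_correlate(f1, f2):
    """Estimate the displacement between two image blocks by phase
    correlation: normalize the cross-power spectrum to keep only the
    phase difference of formula 3, then locate the resulting pulse."""
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    R = F1 * np.conj(F2)
    R /= np.abs(R) + 1e-12          # keep only the phase-difference factor
    corr = np.real(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # displacements beyond half the block size wrap to negative shifts
    if dy > f1.shape[0] // 2:
        dy -= f1.shape[0]
    if dx > f1.shape[1] // 2:
        dx -= f1.shape[1]
    return dx, dy
```

Because the cross-power spectrum is normalized to unit magnitude, the result depends only on phase, which is what makes the peak sharp even when the two blocks differ in contrast.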
(4) Because of the uncertainty introduced by the displacement computation and the frame differencing, the temporal object external boundary extracted in step 11 may fail to locate the object boundary precisely, so the invention applies spatial processing here to obtain an accurate object external boundary.
Essence of the invention: it generates the frame-difference image by applying the Gaussian-noise-model seed region growing method to two successive frames after global motion compensation. It then applies the phase correlation method to the two actual frames of the corresponding change region to obtain the motion vector, uses this motion vector to distinguish motion-model regions from invalid regions, and extracts the moving object in the motion region with the seed region growing method according to the motion vector. After the moving-object region is mended, the spatial external boundary is detected. The temporal analysis is then finished, and spatial information is used for refinement: the temporal object external boundary points are first replaced by spatial-gradient maximum points, and finally the canny boundaries at the maximum points are connected, thereby obtaining the external boundary of the moving object.
The method of the invention has three features: first, the seed region growing method based on the Gaussian noise model generates a strongly noise-resistant frame-difference image and therefore adapts to various complex backgrounds; second, the fast phase correlation method obtains the temporal moving object and can form a target object region with a single semantics; third, only the spatial boundary information at the temporal object edge is used to correct the moving object. The overall method therefore has outstanding noise resistance and a reasonable, simple fusion of spatial and temporal information, giving the invention fast computation and strong robustness.
The innovations of the invention are:
1. The seed region growing method based on the Gaussian noise model generates a strongly noise-resistant frame-difference image. Because its denoising is theoretically sound, it adapts to various complex backgrounds, which improves the robustness of the method and greatly reduces its computation.
2. The fast phase correlation method obtains the temporal moving object and can form a target object region with a single semantics, achieving accuracy while being fast.
3. The obtained temporal object boundary is replaced by spatial-boundary maximum points, and finally the canny boundaries are connected, so that the boundary information is located accurately on the basis of making full use of the temporal information.
The method for extracting the external boundary of a video object of the invention makes full use of the spatial and temporal information of the image sequence. It not only obtains accurate external boundary localization, but the overall method also has very high robustness, and its computation speed is suitable for real-time systems, so it has strong prospects for practical application.
Description of drawings
Fig. 1 is the overall flow diagram of the present invention
Fig. 2 is the flow diagram of the temporal boundary extraction in the present invention
Fig. 3 is the flow diagram of the spatial boundary refinement and connection in the present invention
Embodiment:
A concrete implementation example of the invention is given below; the example is the extraction of a vehicle under a complex background.
Step 1: for two successive frames of the video stream after global motion compensation, where the current frame is f(x, y, k) and the previous frame is f(x, y, k-1) (x is the row coordinate of the image matrix, y the column coordinate, k the temporal position of the frame in the whole video stream and k-1 that of the previous frame), remove the values greater than the threshold 40 from the direct absolute difference, then accumulate the remaining absolute values and compute their mean M and variance A;
Step 2: with pixels greater than M + 4 × A as seed points and [M + A, M + 4 × A] as the growth condition, apply the seed region growing method to |f(x, y, k) − f(x, y, k-1)|, obtaining the noise-resistant binary image of the temporal change region, denoted h(x, y, k);
Step 3: filter h(x, y, k) with the morphological opening and closing operations, obtaining the denoised temporal change-region image, denoted o(x, y, k);
Step 4: perform connectivity labeling on o(x, y, k), obtaining the labelled temporal change-region image, denoted l(x, y, k). The concrete method is: scan the image and find a 1-pixel without a mark, assigning it a new label L; recursively assign the label L to its 1-neighbors; stop when no unmarked point remains; return to the start and scan the image again, until all 1-pixels are marked. The labelled temporal change-region image is thus obtained;
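The labeling of step 4 can be sketched as follows; an iterative flood fill replaces the recursion described above (an implementation choice to avoid deep recursion on large regions), and 8-connectivity is assumed:

```python
import numpy as np
from collections import deque

def label_regions(binary):
    """Connectivity labeling: assign each 8-connected region of 1-pixels
    its own integer label, returning the label image and the label count."""
    labels = np.zeros(binary.shape, dtype=np.int32)
    next_label = 0
    for y, x in zip(*np.nonzero(binary)):
        if labels[y, x]:
            continue                      # already belongs to a region
        next_label += 1                   # new region found: flood fill it
        q = deque([(y, x)])
        labels[y, x] = next_label
        while q:
            cy, cx = q.popleft()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = cy + dy, cx + dx
                    if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                            and binary[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = next_label
                        q.append((ny, nx))
    return labels, next_label
```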
Step 5: for each labelled temporal change region in l(x, y, k), extract the corresponding temporal single-change-region binary image with respect to f(x, y, k) and f(x, y, k-1), denoted ch(x, y, k) and ch(x, y, k-1); each extracted binary image now corresponds to the change caused by the displacement of one single moving object across the two successive frames.
Step 6: with ch(x, y, k) and ch(x, y, k-1) as mask images, keep the gray values of f(x, y, k) and f(x, y, k-1) at the positions where the mask is 1, and set f(x, y, k) and f(x, y, k-1) to 0 at the positions where the mask is 0, obtaining for each label the two temporal single-change-region gray-level images that keep only the gray values of the corresponding temporal change region, denoted gch(x, y, k) and gch(x, y, k-1);
Step 7: for the temporal single-change-region binary image of each label in l(x, y, k) corresponding to ch(x, y, k), compute the coordinates of the four vertices of its bounding rectangle, denoted (x0, y0), (x1, y1), (x2, y2), (x3, y3).
Step 8: according to (x0, y0), (x1, y1), (x2, y2), (x3, y3), extract from gch(x, y, k) and gch(x, y, k-1) the two local temporal change-region gray-level images of each label, denoted och(x, y, k) and och(x, y, k-1).
Step 9: take och(x, y, k) and och(x, y, k-1) as f1(x, y) and f2(x + d1, y + d2) in (formula 1), and obtain the motion vector (d1, d2) according to (formula 2) and (formula 3);
Step 10: for och(x, y, k) and och(x, y, k-1), take
|och(x, y, k) − och(x + d1, y + d2, k-1)| < |och(x, y, k) − och(x, y, k-1)| and
|och(x, y, k) − och(x + d1, y + d2, k-1)| < Th1 (Th1 is set to 2)
as the seed condition, and
|och(x, y, k) − och(x + d1, y + d2, k-1)| < |och(x, y, k) − och(x, y, k-1)| and
Th1 < |och(x, y, k) − och(x + d1, y + d2, k-1)| < Th2 (Th2 is set to 5)
as the growth condition, and perform seed region growing to obtain the temporal moving-object region. The concrete growing method is: scan the pixels of och(x, y, k) and och(x, y, k-1) one by one; pixels satisfying the seed condition are set as seed pixels; then search the neighborhood of each seed pixel and set the points satisfying the growth condition as seed pixels; after all pixels of the local temporal change-region gray-level images have been searched, the grown temporal single-moving-object region is obtained; all pixels in this region are set to 1 and all pixels outside it to 0, giving the binarized temporal single-moving-object regions, denoted to(x, y, k) and to(x, y, k-1);
Step 11: perform the row and column mending on to(x, y, k) and to(x, y, k-1), obtaining the complete temporal single-moving-object regions, denoted fto(x, y, k) and fto(x, y, k-1);
Step 12: Extract the motion-object outer boundary contour points on fto(x, y, k) and fto(x, y, k-1). The concrete method is: (1) scan fto(x, y, k) and fto(x, y, k-1) from left to right and from top to bottom to find the region starting point s(k) = (x(k), y(k)), k = 0 (where k is the sequence index of the contour points obtained, (x(k), y(k)) are the coordinates of the k-th point, and s(k) denotes the k-th contour point); the pixel currently being tracked on the boundary is denoted c; (2) set c = s(k), and denote the left 4-neighbor of c as b (b lies outside the connected region); (3) denote the 8 neighbors of c, taken counterclockwise starting from b, as n1, n2, ..., n8; (4) starting from b and proceeding counterclockwise, find the first point ni that belongs to the connected region; (5) set c = s(k) = ni and b = n(i-1); (6) repeat steps (3), (4), (5) until s(k) = s(0);
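Steps (1)-(6) above describe a Moore-neighbor boundary trace. A hedged sketch using the simple stop-at-start criterion of step (6); `trace_contour` is an illustrative name, and the counterclockwise neighbor order is stated in image coordinates (y growing downward):

```python
def trace_contour(mask):
    """Moore-neighbor boundary tracing: find the start pixel in raster
    order, then repeatedly walk the 8-neighborhood of the current pixel c,
    counterclockwise from the backtrack point b, until the start recurs."""
    h, w = len(mask), len(mask[0])

    def fg(p):
        x, y = p
        return 0 <= x < w and 0 <= y < h and mask[y][x] == 1

    # (1) region starting point: first object pixel in raster order
    start = next((x, y) for y in range(h) for x in range(w) if mask[y][x])
    contour = [start]
    c = start
    b = (start[0] - 1, start[1])   # (2) left 4-neighbor, outside the region
    # 8 neighbor offsets in counterclockwise order (image coords, y down)
    ccw = [(-1, 0), (-1, 1), (0, 1), (1, 1),
           (1, 0), (1, -1), (0, -1), (-1, -1)]
    while True:
        # (3)-(4) examine neighbors of c counterclockwise, starting after b
        i0 = ccw.index((b[0] - c[0], b[1] - c[1]))
        for k in range(1, 9):
            n = ccw[(i0 + k) % 8]
            p = (c[0] + n[0], c[1] + n[1])
            if fg(p):
                # (5) b becomes the neighbor examined just before ni
                pn = ccw[(i0 + k - 1) % 8]
                b = (c[0] + pn[0], c[1] + pn[1])
                c = p
                break
        if c == start:             # (6) stop when the start point recurs
            return contour
        contour.append(c)

square = [[1, 1],
          [1, 1]]
print(trace_contour(square))  # -> [(0, 0), (0, 1), (1, 1), (1, 0)]
```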
Step 13: For the motion-object outer boundary contour points of fto(x, y, k) and fto(x, y, k-1) obtained above, substitute the spatial-domain boundary maximum point point by point, obtaining the spatial-boundary maximum-point images of the temporal objects, denoted mp(x, y, k) and mp(x, y, k-1);
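The per-point substitution of step 13 can be sketched as follows: given a spatial gradient-magnitude map, compare the weighted current contour point against its 8 neighbors and keep the maximum. `boundary_max_point`, the toy gradient map, and the value W = 1.2 are illustrative assumptions (the claims only require W > 1).

```python
def boundary_max_point(grad, x, y, w=1.2):
    """Among a time-domain contour point and its 8 neighbors, pick the
    pixel with the largest spatial gradient, weighting the current point
    by W > 1 to favour the time-domain result."""
    h, wd = len(grad), len(grad[0])
    best, best_g = (x, y), w * grad[y][x]       # weighted current point
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < wd and 0 <= ny < h and grad[ny][nx] > best_g:
                best, best_g = (nx, ny), grad[ny][nx]
    return best

grad = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
print(boundary_max_point(grad, 1, 1))          # -> (2, 2)
print(boundary_max_point(grad, 1, 1, w=2.0))   # -> (1, 1), weight keeps c
```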
Step 14: Extract boundaries from och(x, y, k) and och(x, y, k-1) with the Canny boundary-extraction method, obtaining the spatial-domain Canny boundary images of the local time-domain change regions, denoted cy(x, y, k) and cy(x, y, k-1);
Step 15: On cy(x, y, k) and cy(x, y, k-1), connect the boundaries of cy(x, y, k) and cy(x, y, k-1) according to mp(x, y, k) and mp(x, y, k-1), obtaining the final spatio-temporally fused motion-object outer boundary, i.e. the video-object outer boundary, denoted tsb(x, y, k) and tsb(x, y, k-1).
Following the above steps, programmed in the MATLAB language, the final result can be obtained by computer simulation. Comparison with the existing three-frame-difference method and other conventional video-object extraction methods shows that the present method extracts the video object completely, runs fast, is robust, and processes little information, making it efficient and widely applicable.
Claims (1)
1. A method for extracting the outer boundary of a video object, characterized in that it comprises the following steps:
Step 1: For two successive gray-level frames in the video stream that have undergone global motion compensation, perform frame-difference processing using a seed-region growing method conditioned on a Gaussian noise model, obtaining a noise-resistant binarized time-domain frame-difference image, in which the regions of value 1 are the time-domain change regions;

The concrete method is: first compute the absolute differences between the two successive globally-motion-compensated gray-level frames; absolute differences greater than 40 are set to 0; compute the mean and variance of the remaining absolute differences, denoting the mean by M and the variance by A; then, from the mean M and variance A of the absolute-difference image obtained above, set a threshold of M+4*A as the seed condition, and set the interval [M+A, M+4*A] as the growth condition; search the absolute-difference image of the two successive globally-motion-compensated gray-level frames point by point, and set all pixels whose absolute difference is greater than the seed threshold as seed pixels; then search the neighborhood of each seed pixel, and set pixels satisfying the growth condition as seed pixels; after the whole absolute-difference image has been searched, the grown region is obtained; set all pixels in this region to 1 and all pixels outside it to 0, obtaining the binarized time-domain frame-difference image;
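The statistics of the concrete method above can be sketched as follows; `frame_diff_thresholds` is an illustrative name, and reading "variance A" as the standard deviation of the remaining differences is an assumption of this sketch.

```python
def frame_diff_thresholds(f1, f2):
    """Seed/growth thresholds for the Gaussian-noise frame difference:
    absolute differences above 40 are excluded as outliers; the remainder
    give mean M and spread A (taken here as the standard deviation),
    yielding seed threshold M + 4*A and growth interval [M + A, M + 4*A]."""
    diffs = [abs(a - b)
             for r1, r2 in zip(f1, f2)
             for a, b in zip(r1, r2)
             if abs(a - b) <= 40]              # outlier rejection at 40
    m = sum(diffs) / len(diffs)
    a = (sum((d - m) ** 2 for d in diffs) / len(diffs)) ** 0.5
    return m + 4 * a, (m + a, m + 4 * a)       # seed threshold, growth interval

f1 = [[10, 10, 10, 10]]
f2 = [[12, 10, 12, 10]]
print(frame_diff_thresholds(f1, f2))  # -> (5.0, (2.0, 5.0))
```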
Step 2: Perform morphological filtering on the binarized time-domain frame-difference image generated by the seed-region growing method in step 1, obtaining a denoised time-domain change-region image;

Step 3: Apply a region connectivity labeling method to the denoised time-domain change-region image obtained in step 2, obtaining a labeled time-domain change-region image;

Step 4: Take out the labels on the labeled change-region image obtained in step 3 one by one, then scan the labeled time-domain change-region image pixel by pixel: if a pixel's label equals the taken-out label, set the pixel to 1; if different, set it to 0; pixels without a label are set to 0. After scanning, the single time-domain change-region binary image of that label is formed; each label in turn generates its own single time-domain change-region binary image;

Step 5: Using the single time-domain change-region binary image of each label obtained in step 4 as a mask image, scan the two successive globally-motion-compensated gray-level frames of step 1: for pixels that are 1 on the mask image, keep the gray value at the corresponding position of the two frames; for pixels that are 0 on the mask image, set the gray value at the corresponding position of the two frames to 0. After scanning, two single time-domain change-region gray-level images are obtained for each label, retaining only the gray values of the corresponding time-domain change region;
Step 6: Compute the coordinates of the four vertices of the bounding rectangle of the region of value 1 in each single time-domain change-region binary image obtained in step 4;

Step 7: According to the four vertex coordinates of the bounding rectangle of each label's single time-domain change-region binary image obtained in step 6, extract the image inside the bounding rectangle from the two single time-domain change-region gray-level images of each label obtained in step 5, forming the two local time-domain change-region gray-level images corresponding to each label;

Step 8: For the two corresponding local time-domain change-region gray-level images of each label obtained in step 7, use the phase correlation method to calculate the relative motion displacement, on those two images, of the actual moving object that produced the local time-domain change region;

Step 9: For the two local time-domain change-region gray-level images of each label obtained in step 7, using the relative motion displacement obtained in step 8 as the pixel-matching condition, apply the seed-region growing method to remove background-covered and background-uncovered regions, generating the single time-domain motion object region;

The concrete method is: according to the relative motion displacement of the moving object on the two local time-domain change-region gray-level images of each label, obtained in step 8, set the seed condition as: if the absolute gray-level difference of corresponding pixels on the two local images of a label, after compensation by the relative motion displacement, is less than 2 and is less than the uncompensated absolute gray-level difference, then the pixel is a seed pixel; set the growth condition as: if the absolute gray-level difference of corresponding pixels, after compensation by the relative motion displacement, is greater than 2 but less than 5 and is less than the uncompensated absolute gray-level difference, then the pixel is a seed-growth pixel; scan the pixels of the two corresponding local time-domain change-region gray-level images of step 7 one by one, and set pixels satisfying the seed condition as seed pixels; then search the neighborhood of each seed pixel, and set points satisfying the growth condition as seed pixels; after all pixels of the local time-domain change-region gray-level images have been searched, the grown single time-domain motion object region is obtained; set all pixels in this region to 1 and all pixels outside it to 0, obtaining the binarized single time-domain motion object region;
Step 10: Apply a repair method to the binarized single time-domain motion object region of step 9, obtaining the complete single time-domain motion object region;

The concrete repair method is: scan the rows and columns of the binarized single time-domain motion object region; non-object pixels lying between two motion-object pixels of the same row or the same column are set to motion-object pixels, while other pixels remain unchanged, giving the complete single time-domain motion object region;
Step 11: Apply a contour boundary-extraction method to the complete single time-domain motion object region generated in step 10, obtaining the motion-object outer boundary contour points;

Step 12: For the motion-object outer boundary contour points obtained in step 11, substitute the spatial-domain boundary maximum point point by point, obtaining the spatial-boundary maximum-point image of the temporal object;

The concrete method is: compute the spatial-domain gradients of the 8 neighbors of each time-domain boundary contour point and compare them with the spatial-domain gradient of the current time-domain boundary point weighted by W, where the weight W is chosen as a number greater than 1 so as to strengthen the time-domain result; select the point with the maximum spatial-domain gradient as the spatial-boundary maximum point of the temporal object, obtaining the spatial-boundary maximum-point image of the temporal object;
Step 13: Using the Canny boundary-extraction method, extract boundaries from the two corresponding local time-domain change-region gray-level images of each label obtained in step 7, obtaining the spatial-domain Canny boundary images of the local time-domain change regions;

Step 14: On the spatial-domain Canny boundary images of the local time-domain change regions of step 13, perform a connection operation according to the spatial-boundary maximum-point image of the temporal object obtained in step 12, obtaining the spatio-temporally fused motion-object outer boundary;

The concrete connection operation is: keep the spatial-domain Canny boundaries of the local time-domain change regions that contain a spatial-boundary maximum point, and remove those that contain no spatial-boundary maximum point, forming the spatio-temporally fused motion-object outer boundary.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2005100215413A CN100446544C (en) | 2005-08-26 | 2005-08-26 | Method for extraction method of video object external boundary |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1921560A CN1921560A (en) | 2007-02-28 |
CN100446544C true CN100446544C (en) | 2008-12-24 |
Family
ID=37779104
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2005100215413A Expired - Fee Related CN100446544C (en) | 2005-08-26 | 2005-08-26 | Method for extraction method of video object external boundary |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100446544C (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101184235B (en) * | 2007-06-21 | 2010-07-28 | 腾讯科技(深圳)有限公司 | Method and apparatus for implementing background image extraction from moving image |
CN100495274C (en) * | 2007-07-19 | 2009-06-03 | 上海港机重工有限公司 | Control method for automatic drive of large engineering vehicle and system thereof |
JP4683079B2 (en) * | 2008-07-07 | 2011-05-11 | ソニー株式会社 | Image processing apparatus and method |
CN101930593B (en) * | 2009-06-26 | 2012-11-21 | 鸿富锦精密工业(深圳)有限公司 | Single object image extracting system and method |
GB0919235D0 (en) * | 2009-11-03 | 2009-12-16 | De Beers Centenary AG | Inclusion detection in polished gemstones |
CN102111530B (en) * | 2009-12-24 | 2013-01-02 | 财团法人工业技术研究院 | Device and method for movable object detection |
JP5683888B2 (en) * | 2010-09-29 | 2015-03-11 | オリンパス株式会社 | Image processing apparatus, image processing method, and image processing program |
FI20106090A0 (en) * | 2010-10-21 | 2010-10-21 | Zenrobotics Oy | Procedure for filtering target image images in a robotic system |
CN102855642B (en) * | 2011-06-28 | 2018-06-15 | 富泰华工业(深圳)有限公司 | The extracting method of image processing apparatus and its contour of object |
CN102263955B (en) * | 2011-07-21 | 2013-04-03 | 福建星网视易信息系统有限公司 | Method for detecting video occlusion based on motion vectors |
DE102013205882A1 (en) * | 2013-04-03 | 2014-10-09 | Robert Bosch Gmbh | Method and device for guiding a vehicle around an object |
US8917940B2 (en) * | 2013-04-26 | 2014-12-23 | Mitutoyo Corporation | Edge measurement video tool with robust edge discrimination margin |
US10674180B2 (en) * | 2015-02-13 | 2020-06-02 | Netflix, Inc. | Techniques for identifying errors introduced during encoding |
CN105731260A (en) * | 2016-01-20 | 2016-07-06 | 上海振华重工电气有限公司 | Automatic driving system and method of rubber tyred gantry container crane |
CN106091984B (en) * | 2016-06-06 | 2019-01-25 | 中国人民解放军信息工程大学 | A kind of three dimensional point cloud acquisition methods based on line laser |
CN107067012B (en) * | 2017-04-25 | 2018-03-16 | 中国科学院深海科学与工程研究所 | Submarine geomorphy cell edges intelligent identification Method based on image procossing |
CN110647821B (en) * | 2019-08-28 | 2023-06-06 | 盛视科技股份有限公司 | Method and device for object identification through image identification |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07225847A (en) * | 1994-02-10 | 1995-08-22 | Fujitsu General Ltd | Image extracting method |
CN1411282A (en) * | 2001-10-08 | 2003-04-16 | Lg电子株式会社 | Method for extracting target area |
CN1497494A (en) * | 2002-10-17 | 2004-05-19 | 精工爱普生株式会社 | Method and device for segmentation low depth image |
JP2005039485A (en) * | 2003-07-18 | 2005-02-10 | Ricoh Co Ltd | Image extracting method, program, recording medium, and image processing apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20081224 Termination date: 20110826 |