
CN104318266A - Image intelligent analysis processing early warning method - Google Patents

Image intelligent analysis processing early warning method

Info

Publication number
CN104318266A
Authority
CN
China
Prior art keywords
image
value
block
pixel
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410561472.4A
Other languages
Chinese (zh)
Other versions
CN104318266B (en)
Inventor
刘迅
陈宁华
叶修梓
洪振杰
张三元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou University
Original Assignee
Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou University filed Critical Wenzhou University
Priority to CN201410561472.4A priority Critical patent/CN104318266B/en
Publication of CN104318266A publication Critical patent/CN104318266A/en
Application granted granted Critical
Publication of CN104318266B publication Critical patent/CN104318266B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image intelligent analysis and processing early warning method, used for monitoring the environment near electric poles. The method uses yellow-weight weighted graying as its main image processing technique, uses horizontal and vertical projection ratios to correct the influence of camera disturbance on the image, determines the contour of the alarm region through erosion/dilation and a minimum sensitive-block area parameter, and finally fuses color features with the normalized HOG features obtained after image resizing to detect the warning object. Furthermore, a learning mechanism is introduced to dynamically optimize the training parameters. The method completes the detection of the warning object by analyzing the HOG feature extraction of engineering vehicles and adding color frequency features to the HOG feature vectors, forming HOG feature vectors with color features so as to improve the detection success rate.

Description

An image intelligent analysis and processing early warning method
Technical field
The present invention relates to image processing methods, and in particular to an image intelligent analysis and processing early warning method.
Background technology
Background subtraction distinguishes foreground objects from the background by comparing the current frame with a background model: regions showing marked change relative to the background model are obtained by the comparison, and connected-component analysis is then used to mark the foreground object regions. MONNET A et al. divide the image into blocks of fixed size and use an online autoregressive model to build the background model; ZHONG J et al. use a Kalman filter to estimate the appearance of dynamic areas and obtain the foreground object regions by adjusting the threshold of a weighting function.
In natural images, the detection and identification of objects of interest has always been a research hotspot in computer vision; the relatively mature fields are pedestrian detection and face detection. Pedestrian feature extraction includes symmetry, edge density, HOG (histogram of oriented gradients) and so on. HOG shows excellent detection performance in pedestrian extraction, but the extracted human-body feature vectors are of high dimensionality, which severely affects system speed.
Detection methods fall mainly into two classes: feature-based and classification-based. Classification-based methods are generally used, mainly including those based on Adaboost (adaptive boosting), on support vector machines, and on neural networks.
Industry-standard graying and the HSL and HSV color spaces take human visual perception into account and adjust the contributions of the R (red), G (green) and B (blue) components to the gray value during graying. HSL and HSV are two related representations of points in the RGB color space; they attempt to describe perceptual color relationships more accurately than RGB while remaining computationally simple.
The HOG feature is a local region descriptor: it constructs the feature model of an object by computing histograms of edge gradient orientations over local regions, and it has low sensitivity to illumination variation and small spatial offsets.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art and provide an image intelligent analysis and processing early warning method. Image restoration is applied to the original color image through an image degradation/restoration model, and the light sensitivity of the two frames is unified to remove the influence of illumination, shadow and other weather conditions.
The technical scheme of the present invention is as follows:
An image intelligent analysis and processing early warning method comprises the following steps:
1) perform image restoration on the original color image to remove the influence of illumination and shadow, ensuring that the mean pixel values of the two frames are equal, thereby unifying the light sensitivity of the images;
2) compare the RGB three-channel values of the two processed frames I_k, I_{k+1} with the luminance parameter V respectively, and compute the gray value G using the yellow weight v_b:
G = V, if r, g, b > V; otherwise G = (v_r·r + v_g·g + v_b·b) / (v_r + v_g + v_b)   (1)
where G is the gray value; r, g and b are the red, green and blue pixel values at each coordinate; v_r is the weight of the red channel, with value 1.0; v_g is the weight of the green channel, with value 1.0; v_b is the weight of the yellow channel, set as an autonomous learning parameter with initial value 0.18; and V is the luminance parameter, set as an autonomous learning parameter;
3) apply horizontal and vertical projection ratios to the two frames I_k, I_{k+1} processed in step 2) to correct the influence of camera disturbance; the correction translates the two frames according to the shift value Y by which the vertical coordinate should be adjusted and the shift value X by which the horizontal coordinate should be adjusted;
The shift value Y of the vertical coordinate is computed as:
SumMin[m_i] = min_{−d ≤ t ≤ d} { Σ_{x=t+m_i}^{t+m_i+N} Σ_{y=0}^{C} |I_k[x][y] − I_{k+1}[x][y]| / C }   (2)
KMin[m_i] = t   (3)
Y = ( Σ_{m_i} KMin[m_i] ) / M   (4)
where d is the disturbance interval, with range [−5, 5]; M is the number of segments of the vertical projection of the image; m_i is the starting value of each segment; t is the displacement with the smallest difference within the segment; N is the length of the pixel block in each segment; C is the column width of the image; I_k[x][y] is the pixel value of image k at coordinate (x, y); SumMin[m_i] is the minimum pixel difference over the 2d pixel translations of the i-th segment; KMin[m_i] is the pixel shift value at the minimum pixel difference; and Y is the shift value by which the vertical coordinate should be adjusted; the shift value X by which the horizontal coordinate should be adjusted is obtained in the same way;
4) compute the binarization threshold B, and binarize the image processed in step 3) with the threshold B to obtain a binary image, where B is given by formula (5):
B = 255 · V · T / (L · N)   (5)
Take the difference of the two frames I_k, I_{k+1} processed in step 3) and take its absolute value to obtain the difference gray image E; the value I_b of the binary image is given by formula (6):
I_b = 0 if I_d < B; 255 if I_d ≥ B   (6)
where L is the number of rows of the image; N is the length of the pixel block compared in each segment; T is the binarization parameter, used to adjust the correctness of the binarization threshold B and set as an autonomous learning parameter; I_d is the value of the difference gray image E; and I_b is the value after image binarization; the irregular white sensitive blocks in the resulting binary image are then processed to determine the alarm region;
5) extract HOG feature vectors from the alarm region obtained after step 4);
6) add color frequency features to the HOG feature vectors to jointly complete the image early warning function;
7) add a learning feedback mechanism and establish four autonomous learning parameters, namely the luminance threshold V, the yellow weight v_b, the binarization parameter T and the area threshold S; adjust the four autonomous learning parameters through learning according to the learning environment to guarantee the accuracy of early warning and monitoring.
The processing of the irregular white sensitive blocks in the binary image is specifically: apply a 3 × 3 convolution kernel to the irregular white sensitive blocks in the binary image, first eroding and then dilating, n times, removing the outermost layer of pixels each time, to obtain relatively regular blocks P; if no block P exists, the current image contains no warning object and the next frame is read; if blocks P exist, compute the area of each block P and remove the blocks whose area is smaller than the threshold S, where S is an autonomous learning parameter, to obtain suitable sensitive blocks P on which the detection of objects of interest is performed.
Step 5) is specifically:
A) divide the alarm region window evenly into cells of adjacent pixels; divide the 360 degrees of each cell into 9 gradient direction bins; accumulate the gradient magnitudes of all pixels falling in each bin of each cell into a histogram, obtaining the 9-dimensional feature vector of the cell;
B) form a block from every 4 adjacent cells; concatenate the features of all cells within a block to obtain the 36-dimensional feature vector of that block, with a block scanning stride of 1 cell;
C) concatenate the feature vectors of all blocks within the alarm region window to obtain the feature vector of the object in the image, i.e. the HOG feature vector.
Step 6) is specifically: for each cell, a three-dimensional histogram counts the number of times each of the R, G and B color channels is selected; within each block feature, the histograms of all cells are combined to obtain a three-dimensional color frequency feature vector, which is finally appended to each block feature of the HOG feature vector.
The feedback mechanism comprises three choices: the first is whether the alarm region is normal, BA, comprising normal, too large and too small; the second is whether a yellow building is falsely reported, YW, comprising True and False; the third is whether a detection is missed, OA, comprising True and False;
The four autonomous learning parameters are dynamically optimized by the following steps:
i) each time an image is processed, the user may autonomously choose whether to feed back the processing result;
ii) record each feedback result as BA, YW and OA, and record the feedback count Sum and the alarm count ASum; if Sum and ASum are greater than the processing thresholds H and AH respectively, go to step iii); otherwise the four learning parameters need not be modified;
iii) perform statistical analysis on the feedback results and dynamically optimize the four autonomous learning parameters; the rules for changing the four learning parameters are:
when the alarm region is too large, increase the value of the luminance threshold V; otherwise decrease the value of the luminance threshold V;
after the number of false reports of yellow buildings reaches the set value, adjust the value of the yellow weight v_b;
when the number of missed detections is within the set value range, adjust the value of the area threshold S; otherwise adjust the value of the binarization parameter T.
According to the experimental analysis of the present invention, traditional industrial graying and the HSL and HSV transforms give only mediocre results for most outdoor images of this early warning system; by observing a large number of outdoor early warning environments, it was found that the warning objects are concentrated on engineering vehicles, and the graying algorithm based on yellow weighting was therefore proposed.
Camera disturbance correction is also proposed: owing to the outdoor working environment, the camera may undergo pixel-scale movement; to avoid the influence of camera disturbance on the difference between the two frames, the two gray frames are corrected using horizontal and vertical projection ratios.
The present invention analyzes the HOG feature extraction of engineering vehicles and adds color frequency features to the HOG feature vectors to jointly complete the detection of warning objects, forming HOG feature vectors with color features so as to improve the detection success rate.
In practical application, as the region, climate and illumination of the outdoor environment vary greatly, some calculation parameters cannot be obtained by unified training; they must instead be set according to the environment of each preset position. A learning file is therefore introduced to store the parameters corresponding to each preset position, and a feedback mechanism is relied upon to autonomously learn and train these parameters so as to adapt to continuous environmental change.
Accompanying drawing explanation
Fig. 1 is a flow diagram of the image intelligent analysis and processing early warning method;
Fig. 2 is a flow diagram of the image processing method;
Fig. 3 is a model diagram of the image degradation and restoration process of the present invention;
Fig. 4 is the HOG feature vector extraction diagram of the target object;
Fig. 5 is the target object diagram detected by the present invention.
Embodiment
Fig. 1 shows the flow diagram of the image intelligent analysis and processing early warning method. The pictures are acquired by the image processing method shown in Fig. 2.
In the image processing of the following embodiment, the target to be detected is first extracted and simple image restoration is applied to the original color image. Considering that the time difference between the two frames is at least 30 minutes, and outdoor illumination changes greatly within 30 to 90 minutes, the light sensitivity of the two frames must be unified to remove the influence of illumination, shadow and other weather conditions. The target picture is obtained by means of a degradation function and restoration filtering (step 101); the algorithm model is shown in Fig. 3.
Considering human visual perception, the contributions of the R (red), G (green) and B (blue) components to the gray value are adjusted during graying, adding the idea of yellow-weighted graying (step 103). The RGB three channels of the two frames I_k, I_{k+1} after light-sensitivity unification are compared with the luminance threshold V (an autonomous learning parameter) respectively, and the gray value is computed using the yellow weight, expressed as:
G = V, if r, g, b > V; otherwise G = (v_r·r + v_g·g + v_b·b) / (v_r + v_g + v_b)   (1)
where G is the gray value; r, g and b are the red, green and blue pixel values at each coordinate; v_r is the weight of the red channel, with value 1.0; v_g is the weight of the green channel, with value 1.0; v_b is the weight of the yellow channel, set as an autonomous learning parameter with initial value 0.18; and V is the luminance parameter, set as an autonomous learning parameter. Since the RGB value of yellow is (255, 255, 0), the initial yellow weights are set as v_r = 1.0, v_g = 1.0, v_b = 0.18; considering the possibility of yellow interfering objects in the outdoor environment, v_b is set as an autonomous learning parameter. A minimal sketch of this graying step is given below.
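The following Python sketch illustrates formula (1); it is an illustration under assumptions, not the patented implementation, and the default value of the luminance parameter V is a placeholder since the text leaves its value to autonomous learning.

import numpy as np

def yellow_weighted_gray(img_rgb, V=200.0, v_r=1.0, v_g=1.0, v_b=0.18):
    # img_rgb: H x W x 3 array in R, G, B channel order.
    r = img_rgb[..., 0].astype(np.float32)
    g = img_rgb[..., 1].astype(np.float32)
    b = img_rgb[..., 2].astype(np.float32)
    # Weighted gray value; the "yellow weight" v_b is the weight applied to the blue channel.
    gray = (v_r * r + v_g * g + v_b * b) / (v_r + v_g + v_b)
    # Pixels whose three channels all exceed the luminance parameter V take the value V.
    bright = (r > V) & (g > V) & (b > V)
    gray[bright] = V
    return gray.astype(np.uint8)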
The two gray frames I_k, I_{k+1} are corrected for disturbance using horizontal and vertical projection ratios. First, the vertical offset is handled: the vertical projection of the image is divided into M segments; in each segment an N-pixel fragment is taken and translated up and down over 2d pixels to find the displacement t with the smallest difference within that segment; the M values of t of the M segments are then averaged, giving the value Y by which the vertical coordinate should be adjusted (step 105). The expressions are:
SumMin[m_i] = min_{−d ≤ t ≤ d} { Σ_{x=t+m_i}^{t+m_i+N} Σ_{y=0}^{C} |I_k[x][y] − I_{k+1}[x][y]| / C }   (2)
KMin[m_i] = t   (3)
Y = ( Σ_{m_i} KMin[m_i] ) / M   (4)
where d is the disturbance interval, with range [−5, 5]; M is the number of segments of the vertical projection of the image; m_i is the starting value of each segment; t is the displacement with the smallest difference within the segment; N is the length of the pixel block in each segment; C is the column width of the image; I_k[x][y] is the pixel value of image k at coordinate (x, y); SumMin[m_i] is the minimum pixel difference over the 2d pixel translations of the i-th segment; KMin[m_i] is the pixel shift value at the minimum pixel difference; and Y is the shift value by which the vertical coordinate should be adjusted. The shift value X by which the horizontal coordinate should be adjusted is obtained in the same way, expressed as:
SumMin[m_i] = min_{−d ≤ t ≤ d} { Σ_{x=t+m_i}^{t+m_i+N} Σ_{y=0}^{C} |I_k[x][y] − I_{k+1}[x][y]| / C }   (5)
KMin[m_i] = t   (6)
X = ( Σ_{m_i} KMin[m_i] ) / M   (7)
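A Python sketch of the vertical-shift estimation of formulas (2)-(4) follows. It is an assumption-laden illustration: the segment layout and the reading of formula (2) as translating one frame's window against the other are interpretation choices, and the defaults of M, N and d are placeholders.

import numpy as np

def vertical_shift(frame_k, frame_k1, M=8, N=32, d=5):
    # frame_k, frame_k1: two gray frames of equal size (rows x cols).
    rows, cols = frame_k.shape
    starts = np.linspace(0, rows - N - 2 * d, M).astype(int) + d   # one segment start m_i per segment
    shifts = []
    for m in starts:
        diffs = []
        for t in range(-d, d + 1):
            # Translate the window of frame k by t while frame k+1 stays fixed
            # (one reading of formula (2)), and take the mean absolute difference.
            block_a = frame_k[m + t:m + t + N, :].astype(np.float32)
            block_b = frame_k1[m:m + N, :].astype(np.float32)
            diffs.append(np.abs(block_a - block_b).mean())
        shifts.append(int(np.argmin(diffs)) - d)   # KMin[m_i], formula (3)
    return float(np.mean(shifts))                  # Y, formula (4)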
The image binarization threshold differs with light sensitivity, so formula (2) is first relied upon to obtain the minimum value V of the variance among the M minimum pixel differences, and the binarization threshold B is then obtained as:
B = 255 · V · T / (L · N)   (8)
where L is the number of rows of the image, N is as in formula (2), and T is the binarization parameter, which can be used to adjust the correctness of the binarization threshold B; T is set as an autonomous learning parameter.
The target picture is obtained (step 107), and the two frames fed back from the same preset position are differenced to form a difference image for analyzing the alarm region. The concrete steps are as follows:
Step 1: after repeated rounds of processing, the system has pre-stored a relatively stable background image I_k of the current preset position; the detected image I_{k+1} is differenced with I_k on the basis of the yellow-weighted graying and the absolute value is taken, giving the difference gray image I_d.
Step 2: based on the yellow-weight gray image, determine the binarization threshold B and binarize the gray image I_d to obtain the binary image I_b, with the binarization formula:
I_b = 0 if I_d < B; 255 if I_d ≥ B   (9)
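A Python sketch of the differencing and binarization of formulas (8)-(9) follows, as an illustration under assumptions; V is the variance-derived value described above, and the defaults of T and N are placeholders.

import numpy as np

def binarize_difference(gray_k, gray_k1, V, T=1.0, N=32):
    L = gray_k.shape[0]                                   # number of image rows
    B = 255.0 * V * T / (L * N)                           # binarization threshold, formula (8)
    I_d = np.abs(gray_k.astype(np.int16) - gray_k1.astype(np.int16))
    I_b = np.where(I_d < B, 0, 255).astype(np.uint8)      # formula (9)
    return I_b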
Step 3: apply a 3 × 3 convolution kernel to the irregular white sensitive blocks in the binary image I_b, first eroding and then dilating, n times, removing the outermost layer of pixels each time, to obtain some relatively regular blocks P; if no block P exists, the current image contains no warning object and the next frame is read; if blocks P exist, proceed to the next step.
Step 4: compute the areas of these blocks P and remove the blocks smaller than the area threshold S, where S is an autonomous learning parameter, to obtain the sensitive blocks P that are most likely to contain objects of interest; the detection of objects of interest is then carried out on these sensitive blocks.
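The following sketch illustrates the erosion/dilation cleanup and area filtering using OpenCV; the OpenCV calls stand in for the unspecified original implementation, and the defaults of n and the area threshold S are placeholders.

import cv2
import numpy as np

def sensitive_blocks(binary_img, n=2, area_threshold_S=400):
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.erode(binary_img, kernel, iterations=n)     # peel the outermost pixel layer n times
    cleaned = cv2.dilate(cleaned, kernel, iterations=n)        # then grow the surviving blocks back
    num, labels, stats, _ = cv2.connectedComponentsWithStats(cleaned)
    blocks = []
    for i in range(1, num):                                    # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= area_threshold_S:                           # drop blocks whose area is below S
            blocks.append((x, y, w, h))
    return blocks                                              # empty list means no warning object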
After the alarm region is determined, the warning object is detected by analyzing the HOG feature extraction of the engineering vehicle. In image I, the gradient of pixel I(x, y) is computed as:
G_x(x, y) = I(x+1, y) − I(x−1, y)   (10)
G_y(x, y) = I(x, y+1) − I(x, y−1)   (11)
where I(x, y) is the pixel value and G_x(x, y), G_y(x, y) are the horizontal and vertical gradients at pixel (x, y). The gradient magnitude and direction of the pixel are then computed:
G(x, y) = sqrt( G_x(x, y)² + G_y(x, y)² )   (12)
θ(x, y) = arctan( G_y(x, y) / G_x(x, y) )   (13)
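A short sketch of the per-pixel gradient computation of formulas (10)-(13) is given below; it is illustrative only, with the orientation mapped to [0, 360) to match the 9-bin, 360-degree cells described next.

import numpy as np

def gradients(gray):
    g = gray.astype(np.float32)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]       # I(x+1, y) - I(x-1, y), formula (10)
    gy[1:-1, :] = g[2:, :] - g[:-2, :]       # I(x, y+1) - I(x, y-1), formula (11)
    magnitude = np.hypot(gx, gy)             # formula (12)
    direction = np.degrees(np.arctan2(gy, gx)) % 360.0   # formula (13), mapped to [0, 360)
    return magnitude, direction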
The feature extraction steps for the target picture are as follows:
A) divide the alarm region window evenly into cells of adjacent pixels; divide the 360 degrees of each cell into 9 gradient direction bins; accumulate the gradient magnitudes of all pixels falling in each bin of each cell into a histogram, obtaining the 9-dimensional feature vector of the cell;
B) form a block from every 4 adjacent cells; concatenate the features of all cells within a block to obtain the 36-dimensional feature vector of that block, with a block scanning stride of 1 cell;
C) concatenate the feature vectors of all blocks within the alarm region window to obtain the feature vector of the object in the image, i.e. the HOG feature vector, as shown in Fig. 4. A sketch of this cell and block construction is given below.
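The sketch below illustrates steps A) to C); the 8 × 8 pixel cell size is a placeholder assumption, since the text does not specify the cell size in pixels.

import numpy as np

def hog_vector(magnitude, direction, cell=8, bins=9):
    h, w = magnitude.shape
    cy, cx = h // cell, w // cell
    # Step A: a 9-bin orientation histogram per cell, weighted by gradient magnitude.
    cell_hist = np.zeros((cy, cx, bins), np.float32)
    bin_idx = (direction / (360.0 / bins)).astype(int) % bins
    for i in range(cy):
        for j in range(cx):
            m = magnitude[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            cell_hist[i, j] = np.bincount(b.ravel(), weights=m.ravel(), minlength=bins)
    # Steps B and C: 2 x 2 cell blocks scanned with a stride of one cell, then concatenated.
    blocks = []
    for i in range(cy - 1):
        for j in range(cx - 1):
            blocks.append(cell_hist[i:i+2, j:j+2].ravel())   # 36-dimensional block vector
    return np.concatenate(blocks) if blocks else np.zeros(0, np.float32)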
In addition to HOG feature extraction, color features are added to jointly complete the detection. Since 85% of engineering vehicles and facilities are yellow or orange-yellow, the color of engineering vehicles and facilities clusters well and can be used to assist HOG-based detection. The HOG features of the present invention are computed on the gray image, so while the gray-image features are computed, the color frequency features of the RGB color image at the same positions are computed synchronously: for each small bin within a cell, the color channel with the largest gradient is chosen to compute the gradient and direction of each pixel of that bin, and the color information of that bin is represented by the number of times each color channel is selected. The steps are: each cell counts, with a 3-dimensional histogram, the number of times each of the R, G and B channels is selected; each block combines the histograms of all its cells to obtain a 3-dimensional color frequency vector; finally, this 3-dimensional feature vector is appended to each block feature of the HOG feature, giving the HOG feature vector with color features. A sketch of this color frequency extension is given below.
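The sketch below illustrates the color frequency feature under the same placeholder cell size as above; counting, per pixel, the channel with the largest gradient magnitude is one reading of the selection rule described in the text.

import numpy as np

def color_frequency_cells(grad_mag_rgb, cell=8):
    # grad_mag_rgb: H x W x 3 gradient magnitudes, one per RGB channel.
    h, w, _ = grad_mag_rgb.shape
    cy, cx = h // cell, w // cell
    winner = np.argmax(grad_mag_rgb, axis=2)          # channel with the largest gradient per pixel
    freq = np.zeros((cy, cx, 3), np.float32)
    for i in range(cy):
        for j in range(cx):
            patch = winner[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            freq[i, j] = np.bincount(patch.ravel(), minlength=3)   # R/G/B selection counts
    return freq

def append_color_to_block(block_hog_36d, block_cell_freqs):
    # Combine the block's per-cell counts into one 3-D color frequency vector and append it.
    color_vec = block_cell_freqs.reshape(-1, 3).sum(axis=0)
    return np.concatenate([block_hog_36d, color_vec])             # 36 + 3 dimensions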
To adapt to continuous environmental change, a feedback mechanism is relied upon to autonomously learn and train the parameters. There are four autonomous learning parameters: the luminance threshold V, the yellow weight v_b, the binarization parameter T and the area threshold S. A feedback mechanism is designed that counts user feedback on the alarm results and dynamically optimizes the autonomous learning parameters, as follows:
Step 1: each time an image is processed, the user may feed back the processing result (or may not); the feedback module comprises three choices, namely "whether the alarm region is normal (normal, too large, too small)", "whether a yellow building is falsely reported (True, False)" and "whether a detection is missed (True, False)".
Step 2: record each feedback result as BA, YW and OA, and record the feedback count Sum and the alarm count ASum; if Sum and ASum are greater than the processing thresholds H and AH respectively, go to Step 3.
Step 3: perform statistical analysis on the feedback results and dynamically optimize the four autonomous learning parameters. The rules for changing the four learning parameters are:
when the alarm region is too large, increase the value of the luminance threshold V; otherwise decrease the value of the luminance threshold V;
after the number of false reports of yellow buildings reaches the set value, adjust the value of the yellow weight v_b;
when the number of missed detections is within the set value range, adjust the value of the area threshold S; otherwise adjust the value of the binarization parameter T. A sketch of this update loop is given below.
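The following sketch illustrates the feedback-driven update loop; the thresholds H and AH, the step size and the direction of each adjustment are placeholders, since the text specifies only which parameter to adjust, not by how much.

from dataclasses import dataclass

@dataclass
class LearnedParams:
    V: float = 200.0      # luminance threshold (placeholder initial value)
    v_b: float = 0.18     # yellow weight
    T: float = 1.0        # binarization parameter
    S: float = 400.0      # area threshold

def update_params(p, fb, H=50, AH=20, step=0.05):
    # fb: dict with counts 'Sum', 'ASum', 'BA_large', 'BA_small', 'YW', 'OA'.
    if fb["Sum"] <= H or fb["ASum"] <= AH:
        return p                                  # not enough feedback collected yet
    if fb["BA_large"] > fb["BA_small"]:
        p.V *= (1 + step)                         # alarm region too large: raise V
    else:
        p.V *= (1 - step)                         # otherwise lower V
    if fb["YW"] >= 5:                             # repeated yellow-building false alarms
        p.v_b *= (1 - step)                       # adjust the yellow weight (direction assumed)
    if 1 <= fb["OA"] <= 5:                        # missed detections within the set range
        p.S *= (1 - step)                         # adjust the area threshold (direction assumed)
    else:
        p.T *= (1 - step)                         # otherwise adjust the binarization parameter
    return p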

Claims (5)

1. An image intelligent analysis and processing early warning method, characterized by comprising the steps of:
1) performing image restoration on the original color image to remove the influence of illumination and shadow, ensuring that the mean pixel values of the two frames are equal, thereby unifying the light sensitivity of the images;
2) comparing the RGB three-channel values of the two processed frames I_k, I_{k+1} with the luminance parameter V respectively, and computing the gray value G using the yellow weight v_b:
G = V, if r, g, b > V; otherwise G = (v_r·r + v_g·g + v_b·b) / (v_r + v_g + v_b)   (1)
where G is the gray value; r, g and b are the red, green and blue pixel values at each coordinate; v_r is the weight of the red channel, with value 1.0; v_g is the weight of the green channel, with value 1.0; v_b is the weight of the yellow channel, set as an autonomous learning parameter with initial value 0.18; and V is the luminance parameter, set as an autonomous learning parameter;
3) applying horizontal and vertical projection ratios to the two frames I_k, I_{k+1} processed in step 2) to correct the influence of camera disturbance, the correction translating the two frames according to the shift value Y by which the vertical coordinate should be adjusted and the shift value X by which the horizontal coordinate should be adjusted;
the shift value Y of the vertical coordinate being computed as:
SumMin[m_i] = min_{−d ≤ t ≤ d} { Σ_{x=t+m_i}^{t+m_i+N} Σ_{y=0}^{C} |I_k[x][y] − I_{k+1}[x][y]| / C }   (2)
KMin[m_i] = t   (3)
Y = ( Σ_{m_i} KMin[m_i] ) / M   (4)
where d is the disturbance interval, with range [−5, 5]; M is the number of segments of the vertical projection of the image; m_i is the starting value of each segment; t is the displacement with the smallest difference within the segment; N is the length of the pixel block in each segment; C is the column width of the image; I_k[x][y] is the pixel value of image k at coordinate (x, y); SumMin[m_i] is the minimum pixel difference over the 2d pixel translations of the i-th segment; KMin[m_i] is the pixel shift value at the minimum pixel difference; and Y is the shift value by which the vertical coordinate should be adjusted; the shift value X by which the horizontal coordinate should be adjusted being obtained in the same way;
4) computing the binarization threshold B, and binarizing the image processed in step 3) with the threshold B to obtain a binary image, where B is given by formula (5):
B = 255 · V · T / (L · N)   (5)
taking the difference of the two frames I_k, I_{k+1} processed in step 3) and taking its absolute value to obtain the difference gray image E, the value I_b of the binary image being given by formula (6):
I_b = 0 if I_d < B; 255 if I_d ≥ B   (6)
where L is the number of rows of the image; N is the length of the pixel block compared in each segment; T is the binarization parameter, used to adjust the correctness of the binarization threshold B and set as an autonomous learning parameter; I_d is the value of the difference gray image E; and I_b is the value after image binarization; the irregular white sensitive blocks in the resulting binary image then being processed to determine the alarm region;
5) extracting HOG feature vectors from the alarm region obtained after step 4);
6) adding color frequency features to the HOG feature vectors to jointly complete the image early warning function;
7) adding a learning feedback mechanism and establishing four autonomous learning parameters, namely the luminance threshold V, the yellow weight v_b, the binarization parameter T and the area threshold S, and adjusting the four autonomous learning parameters through learning according to the learning environment to guarantee the accuracy of early warning and monitoring.
2. The image processing method as claimed in claim 1, characterized in that processing the irregular white sensitive blocks in the binary image is specifically: applying a 3 × 3 convolution kernel to the irregular white sensitive blocks in the binary image, first eroding and then dilating, n times, removing the outermost layer of pixels each time, to obtain relatively regular blocks P; if no block P exists, the current image contains no warning object and the next frame is read; if blocks P exist, computing the area of each block P and removing the blocks whose area is smaller than the threshold S, where S is an autonomous learning parameter, to obtain suitable sensitive blocks P on which the detection of objects of interest is performed.
3. The image processing method as claimed in claim 1, characterized in that step 5) is specifically:
A) dividing the alarm region window evenly into cells of adjacent pixels; dividing the 360 degrees of each cell into 9 gradient direction bins; accumulating the gradient magnitudes of all pixels falling in each bin of each cell into a histogram, obtaining the 9-dimensional feature vector of the cell;
B) forming a block from every 4 adjacent cells; concatenating the features of all cells within a block to obtain the 36-dimensional feature vector of that block, with a block scanning stride of 1 cell;
C) concatenating the feature vectors of all blocks within the alarm region window to obtain the feature vector of the object in the image, i.e. the HOG feature vector.
4. The image processing method as claimed in claim 1, characterized in that step 6) is specifically: for each cell, a three-dimensional histogram counts the number of times each of the R, G and B color channels is selected; within each block feature, the histograms of all cells are combined to obtain a three-dimensional color frequency feature vector, which is finally appended to each block feature of the HOG feature vector.
5. The image processing method as claimed in claim 1, characterized in that the feedback mechanism comprises three choices: the first is whether the alarm region is normal, BA, comprising normal, too large and too small; the second is whether a yellow building is falsely reported, YW, comprising True and False; the third is whether a detection is missed, OA, comprising True and False;
the four autonomous learning parameters are dynamically optimized by the following steps:
i) each time an image is processed, the user may autonomously choose whether to feed back the processing result;
ii) record each feedback result as BA, YW and OA, and record the feedback count Sum and the alarm count ASum; if Sum and ASum are greater than the processing thresholds H and AH respectively, go to step iii); otherwise the four learning parameters need not be modified;
iii) perform statistical analysis on the feedback results and dynamically optimize the four autonomous learning parameters; the rules for changing the four learning parameters are:
when the alarm region is too large, increase the value of the luminance threshold V; otherwise decrease the value of the luminance threshold V;
after the number of false reports of yellow buildings reaches the set value, adjust the value of the yellow weight v_b;
when the number of missed detections is within the set value range, adjust the value of the area threshold S; otherwise adjust the value of the binarization parameter T.
CN201410561472.4A 2014-10-19 2014-10-19 An image intelligent analysis and processing early warning method Expired - Fee Related CN104318266B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410561472.4A CN104318266B (en) 2014-10-19 2014-10-19 An image intelligent analysis and processing early warning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410561472.4A CN104318266B (en) 2014-10-19 2014-10-19 An image intelligent analysis and processing early warning method

Publications (2)

Publication Number Publication Date
CN104318266A true CN104318266A (en) 2015-01-28
CN104318266B CN104318266B (en) 2017-06-13

Family

ID=52373495

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410561472.4A Expired - Fee Related CN104318266B (en) 2014-10-19 2014-10-19 An image intelligent analysis and processing early warning method

Country Status (1)

Country Link
CN (1) CN104318266B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809438A (en) * 2015-04-29 2015-07-29 腾讯科技(深圳)有限公司 Method and device for detecting electronic eyes
CN106274690A (en) * 2016-08-25 2017-01-04 乐视控股(北京)有限公司 A kind of vehicle periphery scene monitoring method and system
CN108074370A (en) * 2016-11-11 2018-05-25 国网湖北省电力公司咸宁供电公司 The early warning system and method that a kind of anti-external force of electric power transmission line based on machine vision is destroyed
CN109348865A (en) * 2018-11-13 2019-02-19 杭州电子科技大学 A kind of vermillion orange picking robot and its picking method
CN111033523A (en) * 2017-09-21 2020-04-17 国际商业机器公司 Data enhancement for image classification tasks
CN111369938A (en) * 2020-04-26 2020-07-03 上海年润光电科技有限公司 Modularized LED display screen and control system thereof
CN112102280A (en) * 2020-09-11 2020-12-18 哈尔滨市科佳通用机电股份有限公司 Method for detecting loosening and loss faults of small part bearing key nut of railway wagon
CN112132969A (en) * 2020-09-01 2020-12-25 济南市房产测绘研究院(济南市房屋安全检测鉴定中心) Vehicle-mounted laser point cloud building target classification method
CN113344874A (en) * 2021-06-04 2021-09-03 温州大学 Pedestrian boundary crossing detection method based on Gaussian mixture modeling

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831409A (en) * 2012-08-30 2012-12-19 苏州大学 Method and system for automatically tracking moving pedestrian video based on particle filtering
CN103489012A (en) * 2013-09-30 2014-01-01 深圳市捷顺科技实业股份有限公司 Crowd density detecting method and system based on support vector machine
JP2015184944A (en) * 2014-03-25 2015-10-22 株式会社明電舎 Person detection device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831409A (en) * 2012-08-30 2012-12-19 苏州大学 Method and system for automatically tracking moving pedestrian video based on particle filtering
CN103489012A (en) * 2013-09-30 2014-01-01 深圳市捷顺科技实业股份有限公司 Crowd density detecting method and system based on support vector machine
JP2015184944A (en) * 2014-03-25 2015-10-22 株式会社明電舎 Person detection device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
O. DÉNIZ et al.: "Face recognition using Histograms of Oriented Gradients", Pattern Recognition Letters *
QU Yongyu et al.: "Pedestrian detection based on HOG and color features", Journal of Wuhan University of Technology *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809438B (en) * 2015-04-29 2018-02-23 腾讯科技(深圳)有限公司 A kind of method and apparatus for detecting electronic eyes
CN104809438A (en) * 2015-04-29 2015-07-29 腾讯科技(深圳)有限公司 Method and device for detecting electronic eyes
CN106274690A (en) * 2016-08-25 2017-01-04 乐视控股(北京)有限公司 A kind of vehicle periphery scene monitoring method and system
CN108074370A (en) * 2016-11-11 2018-05-25 国网湖北省电力公司咸宁供电公司 The early warning system and method that a kind of anti-external force of electric power transmission line based on machine vision is destroyed
CN111033523A (en) * 2017-09-21 2020-04-17 国际商业机器公司 Data enhancement for image classification tasks
CN111033523B (en) * 2017-09-21 2023-12-29 国际商业机器公司 Data enhancement for image classification tasks
CN109348865B (en) * 2018-11-13 2023-08-29 杭州电子科技大学 Cinnabar orange picking robot and picking method thereof
CN109348865A (en) * 2018-11-13 2019-02-19 杭州电子科技大学 A kind of vermillion orange picking robot and its picking method
CN111369938A (en) * 2020-04-26 2020-07-03 上海年润光电科技有限公司 Modularized LED display screen and control system thereof
CN112132969A (en) * 2020-09-01 2020-12-25 济南市房产测绘研究院(济南市房屋安全检测鉴定中心) Vehicle-mounted laser point cloud building target classification method
CN112132969B (en) * 2020-09-01 2023-10-10 济南市房产测绘研究院(济南市房屋安全检测鉴定中心) Vehicle-mounted laser point cloud building target classification method
CN112102280B (en) * 2020-09-11 2021-03-23 哈尔滨市科佳通用机电股份有限公司 Method for detecting loosening and loss faults of small part bearing key nut of railway wagon
CN112102280A (en) * 2020-09-11 2020-12-18 哈尔滨市科佳通用机电股份有限公司 Method for detecting loosening and loss faults of small part bearing key nut of railway wagon
CN113344874A (en) * 2021-06-04 2021-09-03 温州大学 Pedestrian boundary crossing detection method based on Gaussian mixture modeling
CN113344874B (en) * 2021-06-04 2024-02-09 温州大学 Pedestrian boundary crossing detection method based on Gaussian mixture modeling

Also Published As

Publication number Publication date
CN104318266B (en) 2017-06-13

Similar Documents

Publication Publication Date Title
CN104318266A (en) Image intelligent analysis processing early warning method
CN109657632B (en) Lane line detection and identification method
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN102663354B (en) Face calibration method and system thereof
CN103763515B (en) A kind of video abnormality detection method based on machine learning
CN105023008A (en) Visual saliency and multiple characteristics-based pedestrian re-recognition method
CN109635758B (en) Intelligent building site video-based safety belt wearing detection method for aerial work personnel
CN110378179B (en) Subway ticket evasion behavior detection method and system based on infrared thermal imaging
CN103679677B (en) A kind of bimodulus image decision level fusion tracking updating mutually based on model
CN102214309B (en) Special human body recognition method based on head and shoulder model
CN103914708B (en) Food kind detection method based on machine vision and system
EP3036714B1 (en) Unstructured road boundary detection
CN107066972B (en) Natural scene Method for text detection based on multichannel extremal region
US20200250840A1 (en) Shadow detection method and system for surveillance video image, and shadow removing method
CN105205489A (en) License plate detection method based on color texture analyzer and machine learning
CN103093274B (en) Method based on the people counting of video
CN103035013A (en) Accurate moving shadow detection method based on multi-feature fusion
CN104715244A (en) Multi-viewing-angle face detection method based on skin color segmentation and machine learning
CN105718882A (en) Resolution adaptive feature extracting and fusing for pedestrian re-identification method
CN102915433A (en) Character combination-based license plate positioning and identifying method
CN104143077B (en) Pedestrian target search method and system based on image
CN110032932B (en) Human body posture identification method based on video processing and decision tree set threshold
CN105574515A (en) Pedestrian re-identification method in zero-lap vision field
CN112101260B (en) Method, device, equipment and storage medium for identifying safety belt of operator
CN104899559B (en) A kind of rapid pedestrian detection method based on video monitoring

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Ye Xiuzi

Inventor after: Liu Xun

Inventor after: Chen Ninghua

Inventor after: Hong Zhenjie

Inventor after: Zhang Sanyuan

Inventor before: Liu Xun

Inventor before: Chen Ninghua

Inventor before: Ye Xiuzi

Inventor before: Hong Zhenjie

Inventor before: Zhang Sanyuan

CB03 Change of inventor or designer information
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20150128

Assignee: Big data and Information Technology Research Institute of Wenzhou University

Assignor: Wenzhou University

Contract record no.: X2020330000098

Denomination of invention: An early warning method for image intelligent analysis and processing

Granted publication date: 20170613

License type: Common License

Record date: 20201115

EE01 Entry into force of recordation of patent licensing contract
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170613

CF01 Termination of patent right due to non-payment of annual fee