
CN104637058A - Image information-based client flow volume identification statistic method - Google Patents


Info

Publication number
CN104637058A
CN104637058A (application CN201510063946.7A); granted as CN104637058B
Authority
CN
China
Prior art keywords
target
image
point
frame image
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510063946.7A
Other languages
Chinese (zh)
Other versions
CN104637058B (en)
Inventor
方康玲
何鹏
付晓薇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Liangpinpu Supply Chain Technology Co Ltd
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE filed Critical Wuhan University of Science and Engineering WUSE
Priority to CN201510063946.7A priority Critical patent/CN104637058B/en
Publication of CN104637058A publication Critical patent/CN104637058A/en
Application granted granted Critical
Publication of CN104637058B publication Critical patent/CN104637058B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 — Scenes; Scene-specific elements
    • G06V 20/50 — Context or environment of the image
    • G06V 20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/53 — Recognition of crowd images, e.g. recognition of crowd congestion
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10016 — Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention relates to a method for recognizing and counting passenger flow based on image information. The method uses an improved adaptive moving-target detection method; once a moving target is detected in the target recognition region, the image sequence containing moving targets is processed as follows: first, the search region for the target in the next frame is predicted with a Kalman filter; then targets are matched with an improved cost function and a target chain is built to establish correspondences between target images; finally, from the frame-to-frame association of targets, the method determines whether a target enters or leaves the recognition region, or remains stationary or moves continuously inside or outside it, thereby recognizing and counting passenger flow. The invention features strong real-time performance, a high recognition rate, comprehensive statistics, and low hardware requirements.

Description

A Method for Recognizing and Counting Passenger Flow Based on Image Information

Technical Field

The present invention relates to the field of image recognition, and in particular to a method for recognizing and counting passenger flow based on image information.

Background Art

Passenger flow has long been indispensable data for management and decision-making in public places such as shopping malls, airports, bus stops, and subway stations, and with the growth of China's economy and technology, the demand for passenger-flow statistics in many industries keeps rising. Traditional manual counting is time-consuming and laborious, lacks timeliness, requires cumbersome follow-up data processing, and cannot provide real-time statistics, so developing a real-time intelligent passenger-flow counting system is of great significance.

Traditional infrared photoelectric passenger-flow counting systems are simple in principle and widely used, but they have obvious defects: active systems are strongly affected by natural conditions such as temperature and illumination and are unreliable, while passive systems are more reliable but expensive and constrained by the installation site. Counting passenger flow from image information has the advantages of low cost and simple installation and maintenance, and it can connect to an existing surveillance system without purchasing additional equipment.

Such a system mainly has to solve the problem of recognizing, tracking, and counting multiple targets in dynamic images. A variety of dynamic-image passenger-flow counting methods have been proposed; the three most representative are the following:

(1) Building a mathematical-statistical model of real pedestrian trajectories, e.g. obtaining foreground blocks by combining background modeling with a prior-assumption algorithm and then filtering the trajectory model to exclude non-pedestrian trajectories. This saves time compared with detecting pedestrians directly, but it places high demands on the trajectory model and depends on the accuracy of foreground-block recognition.

(2) Genetic algorithms. For example, "Research on a Passenger Flow Analysis System Based on Genetic Algorithms" improves counting accuracy by introducing a genetic algorithm into the counting system; however, because it relies on whole-body recognition, occlusion grows with crowd density and recognition becomes harder when passenger flow is heavy.

(3) Mathematical-morphology algorithms. For example, "Passenger Flow Counting Based on Improved Feature Tracking" proposes a morphology-based counting method: the connected regions of the morphologically processed targets are analyzed, the forward-search principle determines each target's new position, and counting is done along a drawn counting line. The method is simple to implement but easily confuses other non-human moving objects with pedestrians, which hurts counting accuracy.

Summary of the Invention

The technical problem to be solved by the present invention is to provide a method for recognizing and counting passenger flow based on image information that addresses the defects of the prior art.

The technical solution adopted by the present invention is a method for recognizing and counting passenger flow based on image information, comprising the following steps:

1) Determine the target detection region. Set the four parameters of the target-image detection region ROI: ROI.X is the x coordinate of the upper-left corner of the ROI, ROI.Y is the y coordinate of the upper-left corner, ROI.Width is the width of the ROI, and ROI.Height is its height.
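The four ROI parameters above can be kept in a small structure; a minimal sketch follows (the `ROI` class and the `contains` helper are illustrative additions, not part of the patent; the sample values come from the embodiment described later):

```python
from dataclasses import dataclass

@dataclass
class ROI:
    # Upper-left corner and extent of the detection region, in pixels.
    x: int
    y: int
    width: int
    height: int

    def contains(self, px: int, py: int) -> bool:
        # True if pixel (px, py) lies inside the detection region.
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)

# Values used in the embodiment: ROI.X=40, ROI.Y=50, ROI.Width=50, ROI.Height=60
roi = ROI(x=40, y=50, width=50, height=60)
```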

2) Determine the background image of the detection region when no target to be detected is present. Using multi-frame averaging, superimpose 10 to 20 original frames and take their mean to obtain the background image of the video when no target is present, denoted I0.
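The multi-frame averaging above can be sketched in a few lines (pure Python over grayscale frames stored as nested lists; the function name is illustrative):

```python
def average_background(frames):
    """Average 10-20 grayscale frames pixel-wise to estimate the empty-scene
    background I0. `frames` is a list of equally sized 2-D lists of intensities."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(w)]
            for r in range(h)]

# Example with three tiny 2x2 "frames"
frames = [[[10, 20], [30, 40]],
          [[12, 22], [32, 42]],
          [[14, 24], [34, 44]]]
I0 = average_background(frames)  # -> [[12.0, 22.0], [32.0, 42.0]]
```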

3) Process each frame of the captured video and store the frames that contain moving targets into an array in order:

Divide the i-th captured frame into blocks, record it as the current frame, and select the target-image detection region ROI.

Binarize the current frame and apply a distance transform, then mean filtering or sub-sampling, to obtain an enhanced image of the current frame; difference it against the (i-1)-th frame processed by steps 1)-3), and record the difference over the selected ROI as NZ.

Compare the motion-detection difference NZ with the preset minimum alarm element count N0: if NZ < N0, no motion is present, so automatically capture the (i+1)-th original frame and return to step 3); if NZ ≥ N0, save the current frame as I1.

Difference the current frame against the background image I0 and record the difference over the selected ROI as NB.

Compare the target-detection difference NB with the preset minimum alarm element count N0': if NB < N0', no target to be detected is present; if NB ≥ N0', raise an alarm that a moving target exists and store the current frame into the array MovingImages[i][j].
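The two-stage presence test above (frame-to-frame difference NZ against N0, then frame-to-background difference NB against N0') can be sketched as a small decision function; the function and return labels are illustrative, and the changed-pixel counts are assumed to have been computed from the ROI differences:

```python
def detect_motion(nz, nb, n0, n0_bg):
    """Two-stage presence test from the steps above.

    nz:    changed-pixel count of the ROI vs. the previous frame (NZ)
    nb:    changed-pixel count of the ROI vs. the background image (NB)
    n0:    alarm threshold N0;  n0_bg: alarm threshold N0'
    """
    if nz < n0:
        return 'no_motion'        # capture frame i+1 and repeat
    if nb < n0_bg:
        return 'motion_only'      # motion, but no target to be detected
    return 'target_present'       # store the frame in MovingImages[i][j]
```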

4) Partition the frames in the array into blocks according to whether they are consecutive:

(1) Save the minimum and maximum abscissa of the target-image detection region ROI into the variables xin and xout respectively, then read the first original frame from the array MovingImages[i][j] and record it as the current target frame for multi-target tracking and counting.

(2) Define counting variables PersonIn and PersonOut to record the numbers of entering and leaving targets respectively, initialize both to 0, and define motion along the +X direction as entering and against it as leaving.

(3) Apply difference-based adaptive background segmentation to the current target frame and binarize the resulting motion-region image. Morphologically erode the binary image with a 2×2 structuring element (commensurate with head size) to remove false targets (non-head regions), then restore the actual target size by morphological dilation, yielding a binary image containing only the connected regions of moving targets.
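The erode-then-dilate cleanup of step (3) (a morphological opening) can be sketched in pure Python; this is a minimal illustration with the 2×2 element anchored at its top-left pixel, not the patent's implementation:

```python
def erode2x2(img):
    """Binary erosion with a 2x2 structuring element (anchor at top-left):
    a pixel survives only if all four pixels under the element are 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h - 1):
        for c in range(w - 1):
            out[r][c] = img[r][c] & img[r][c + 1] & img[r + 1][c] & img[r + 1][c + 1]
    return out

def dilate2x2(img):
    """Binary dilation with the same 2x2 element, restoring the eroded target size."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if img[r][c]:
                for dr in (0, 1):
                    for dc in (0, 1):
                        if r + dr < h and c + dc < w:
                            out[r + dr][c + dc] = 1
    return out

# A 3x3 blob shrinks to 2x2 under erosion; dilation restores its extent,
# while isolated single pixels (false targets) would vanish entirely.
blob = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
```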

(4) Traverse the binary image containing only moving-target connected regions and store the feature values of each target connected region Area[n][i] in a linked list. The parameters are: image index Num; target index Index; centroid abscissa X and ordinate Y; mean gray level Gray of the corresponding region in the original image; length Length and width Width of the rectangular bounding window of the target region; and area Space of the target region.

(5) From the extracted features of the target connected region Area[n][i], use a Kalman filter to predict the search region for the centroid position p(xi, yi) of the corresponding region Area[n+1][i] in the next frame.

(6) Capture the next frame and apply the operation of step (2) to it.

(7) Same as step (3).

(8) Using the improved cost function, match the target connected regions of the next frame inside the Kalman-predicted search region of the current frame: compute the cost function between the current region Area[n][i] and every target connected region in the corresponding search region of the next frame, and find the minimum (suppose the cost is minimal with the next-frame target Area[n+1][j]).

(9) Compute the centroid distance d between the target connected regions Area[n][i] and Area[n+1][j] and compare it with the limit d0. If d ≤ d0, Area[n+1][j] is the successor of Area[n][i]: replace the features of Area[n][i] with those of Area[n+1][j], mark Area[n+1][j], and extend the target chain.

(10) If d > d0, Area[n][i] has no successor in the next frame; it may have left the image observation window or be temporarily stationary, so the position of its centroid abscissa X must be judged.
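Steps (8)-(10) together form a single matching decision, which can be sketched as follows (the function and its `cost_fn`/`dist_fn` parameters are illustrative stand-ins for the improved cost function and the centroid distance defined later in this document):

```python
def match_target(area_i, candidates, d0, cost_fn, dist_fn):
    """Pick the candidate in the predicted search region with the minimum
    cost; accept it as the successor only if its centroid distance to
    area_i is within the limit d0."""
    best = min(candidates, key=lambda a: cost_fn(area_i, a))
    if dist_fn(area_i, best) <= d0:
        return best      # extend the target chain with `best`
    return None          # no successor: target left the window or is stationary
```

For instance, with 1-D "regions" and absolute-difference cost/distance, `match_target((0,), [(5,), (1,)], 2, lambda a, b: abs(a[0]-b[0]), lambda a, b: abs(a[0]-b[0]))` selects `(1,)`.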

5) Count over the partitioned image frames;

specifically:

5.1) Save the minimum and maximum abscissa of the target-image detection region ROI into the variables xin and xout respectively, then read the first original frame from MovingImages[i][j] and record it as the current target frame for multi-target tracking and counting.

5.2) Define counting variables PersonIn and PersonOut to record the numbers of entering and leaving targets respectively, initialize both to 0, and define motion along the +X axis as entering and against it as leaving.

5.3) Judge the centroid abscissa X of the target connected region Area[n][i]:

When X ≤ xin: if the first centroid abscissa X1 of the trajectory of Area[n][i] satisfies X1 ≤ xin, the target has merely been loitering in the reserved zone and no counter acts; if X1 > xin, the target has left the tracking window in the outbound direction, so increment the out-counter PersonOut by 1 and clear the target chain of Area[n][i].

When X > xout: if the first centroid abscissa X1 of the trajectory of Area[n][i] satisfies X1 ≥ xout, the target has merely been loitering in the reserved zone and no counter acts; if X1 < xout, the target has entered the tracking window in the inbound direction, so increment the in-counter PersonIn by 1 and clear the target chain of Area[n][i].

When xin < X < xout, Area[n][i] is still moving inside the detection zone and its final direction is unknown, so keep its feature values and wait to track it in the next frame; if a matching target appears in the next frame, build the target chain as above, otherwise discard it as interference.
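The three-way counting rule of step 5.3) can be sketched as a pure function over one finished track (function name and the returned action labels are illustrative):

```python
def update_counters(X, X1, xin, xout, person_in, person_out):
    """Counting rule of step 5.3) for one target track.

    X:  centroid abscissa of the track's last position
    X1: centroid abscissa of the track's first position
    Returns updated (person_in, person_out, action).
    """
    if X <= xin:
        if X1 <= xin:
            return person_in, person_out, 'loitering'   # stayed in reserved zone
        return person_in, person_out + 1, 'left'        # crossed against +X: out
    if X > xout:
        if X1 >= xout:
            return person_in, person_out, 'loitering'
        return person_in + 1, person_out, 'entered'     # crossed along +X: in
    return person_in, person_out, 'pending'             # still inside detection zone
```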

5.4) After all target connected regions in the tracking window have been matched, verify that every target in the current frame has been tracked. For any untracked target, check whether its centroid abscissa satisfies either X ≤ xin or X > xout; if so, a new target has appeared, so create a new target chain for it, record its feature values, and go to step 4, sub-step (3); if not, treat it as interference and discard it.

5.5) After all targets in the current frame have been recognized and counted, compute the difference between the in-counter PersonIn and the out-counter PersonOut and record it as the current passenger flow CustomCounting; target counting for the current frame is then complete.

5.6) Recognize and automatically count the target connected regions of each frame in the image array MovingImages[i][j] in turn until the whole image sequence in MovingImages[i][j] has been processed, at which point target recognition and counting ends.

In the above scheme, multi-target tracking and matching with the target-chain cost function proceeds as follows:

1) Obtain three feature values for each of the multiple targets in frame k: centroid coordinates, mean gray level, and target-region area; for target i these are denoted Point[k][i], Gray[k][i], and Space[k][i].

2) Use the Kalman filter to predict the search region in frame k+1 for target i of frame k, traverse that search region, find n candidate targets, and obtain their feature values; for target j these are denoted Point[k+1][j], Gray[k+1][j], and Space[k+1][j].

3) For the n candidate targets in the search region of target i in frame k+1, compute how much each of the three feature values changes relative to target i in frame k, and record the maxima as MaxPoint, MaxGray, and MaxSpace:

MaxPoint[i] = Max(sqrt((Point[k][i].x-Point[k+1][m].x)*(Point[k][i].x-Point[k+1][m].x) + (Point[k][i].y-Point[k+1][m].y)*(Point[k][i].y-Point[k+1][m].y)))

MaxGray[i] = Max(sqrt((Gray[k][i]-Gray[k+1][m])*(Gray[k][i]-Gray[k+1][m])))

MaxSpace[i] = Max(sqrt((Space[k][i]-Space[k+1][m])*(Space[k][i]-Space[k+1][m])))

where 1 ≤ m ≤ n indexes the n targets in the search region of target i, and sqrt denotes the square root.

4) Compute the centroid distance D[i][j], gray-level change H[i][j], and area change S[i][j] between target i in frame k and target j in frame k+1:

D[i][j] = sqrt((Point[k][i].x-Point[k+1][j].x)*(Point[k][i].x-Point[k+1][j].x) + (Point[k][i].y-Point[k+1][j].y)*(Point[k][i].y-Point[k+1][j].y)) / MaxPoint[i];

H[i][j] = sqrt((Gray[k][i]-Gray[k+1][j])*(Gray[k][i]-Gray[k+1][j])) / MaxGray[i];

S[i][j] = sqrt((Space[k][i]-Space[k+1][j])*(Space[k][i]-Space[k+1][j])) / MaxSpace[i]

5) The cost function V[i][j] is obtained from the three feature-value changes computed in step 4):

V[i][j] = αD[i][j] + βH[i][j] + γS[i][j];

where the coefficients α, β, and γ weight the influence of the three feature values and can be adjusted to the actual situation.
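The cost function above can be sketched directly from the formulas (note sqrt((a-b)²) is just |a-b|; the function signature and default weights are illustrative):

```python
import math

def cost(pi, pj, gi, gj, si, sj, max_point, max_gray, max_space,
         alpha=1.0, beta=1.0, gamma=1.0):
    """Improved cost function V = alpha*D + beta*H + gamma*S.

    pi, pj: centroids (x, y); gi, gj: mean gray levels; si, sj: areas.
    Each term is normalized by the largest change observed in the search
    region (MaxPoint, MaxGray, MaxSpace), so the three features contribute
    on a comparable scale; alpha/beta/gamma are tuned to the scene.
    """
    D = math.hypot(pi[0] - pj[0], pi[1] - pj[1]) / max_point
    H = abs(gi - gj) / max_gray
    S = abs(si - sj) / max_space
    return alpha * D + beta * H + gamma * S
```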

In the above scheme, predicting the search region with the Kalman filter proceeds as follows:

1) Assume each target centroid in the image moves at constant velocity and the bounding-window area is essentially constant. Initialize the target's velocity Vx in the x direction, velocity Vy in the y direction, the rate of change Vt of the bounding-window length and width, and the image sampling step T (the time interval between consecutive frames of the sequence).

2) For target i of frame k-1, obtain the optimal estimates of the corresponding feature values in frame k: the centroid coordinates Point[k][i].x and Point[k][i].y, the bounding-window length Win[k][i].length and width Win[k][i].width, and the deviation estimate P[k][i] of these feature quantities.

3) From the results of step 2), compute the predicted feature values and deviation estimate for target i in frame k+1:

Point[k+1][i].x = Point[k][i].x + Vx*T;

Point[k+1][i].y = Point[k][i].y + Vy*T;

Win[k+1][i].length = Win[k][i].length + Vt*T;

Win[k+1][i].width = Win[k][i].width + Vt*T;

P[k+1][i] = P[k][i] + Q;

4) From the predicted feature values and deviation estimate of target i in frame k+1, compute the corresponding Kalman gain:

Kg[k+1][i] = P[k][i] / (P[k][i] + R)

Then compute the optimal estimates of the feature values and deviation of target i in frame k+1 and update the predictions:

Point[k+1][i].x = Point[k+1][i].x + Kg[k+1][i]*(Z[k+1][i].x - Point[k+1][i].x);
Point[k+1][i].y = Point[k+1][i].y + Kg[k+1][i]*(Z[k+1][i].y - Point[k+1][i].y);

Win[k+1][i].length = Win[k+1][i].length + Kg[k+1][i]*(Z[k+1][i].length - Win[k+1][i].length);

Win[k+1][i].width = Win[k+1][i].width + Kg[k+1][i]*(Z[k+1][i].width - Win[k+1][i].width);

P[k+1][i] = (1 - Kg[k+1][i]) * P[k][i];

5) The predicted search region (X, Y) in frame k+1 for target i of frame k is then:

Point[k+1][i].x - 1.5*Win[k+1][i].length/2 ≤ X ≤ Point[k+1][i].x + 1.5*Win[k+1][i].length/2;

Point[k+1][i].y - 1.5*Win[k+1][i].width/2 ≤ Y ≤ Point[k+1][i].y + 1.5*Win[k+1][i].width/2.
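One predict/correct cycle for a single target can be sketched as below. This is a simplified scalar version: it treats Q and R as scalar noise constants and, as an assumption differing slightly from the gain formula above, uses the predicted deviation P[k]+Q in the gain (the standard Kalman form) rather than P[k]; the function name and return shape are illustrative.

```python
def kalman_step(point, win, P, meas, Vx, Vy, Vt, T, Q, R):
    """One predict/correct cycle for one target.

    point=(x, y) centroid; win=(length, width) bounding window; P scalar
    deviation estimate; meas=(zx, zy, zlen, zwid) measurement in frame k+1.
    """
    # Predict (constant-velocity model)
    x = point[0] + Vx * T
    y = point[1] + Vy * T
    length = win[0] + Vt * T
    width = win[1] + Vt * T
    P_pred = P + Q
    # Gain and correction
    Kg = P_pred / (P_pred + R)
    x += Kg * (meas[0] - x)
    y += Kg * (meas[1] - y)
    length += Kg * (meas[2] - length)
    width += Kg * (meas[3] - width)
    P_new = (1 - Kg) * P_pred
    # Search region for the next frame: centroid +/- 1.5 * window extent / 2
    search = ((x - 1.5 * length / 2, x + 1.5 * length / 2),
              (y - 1.5 * width / 2, y + 1.5 * width / 2))
    return (x, y), (length, width), P_new, search
```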

In the above scheme, the association-matching and data-association algorithm for targets in the target chain in step 4, sub-step (8), is implemented as follows:

During association matching of the target chain, if the j-th and m-th targets in the predicted search region of target i both have the same, minimal cost-function value with respect to target i of frame k, the tracking match conflicts; the following improved algorithm then completes the correct match.

Compute the cost functions between target i of frame k and targets j and m of frame k+1; if the values are equal and minimal, record the gray-level features of the three targets as Gray[k][i], Gray[k+1][j], and Gray[k+1][m].

Compute the mean gray-level changes between target i of frame k and targets j and m of frame k+1:

Value1 = sqrt((Gray[k][i]-Gray[k+1][j])*(Gray[k][i]-Gray[k+1][j]));

Value2 = sqrt((Gray[k][i]-Gray[k+1][m])*(Gray[k][i]-Gray[k+1][m]));

If Value1 < Value2, target j of frame k+1 is the successor of target i of frame k: replace the features of target i with those of target j and update the target chain; otherwise target m of frame k+1 is the successor.

The candidate that did not match target i of frame k may be matched with another target of frame k, or be retained to await a match in the next frame.
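The tie-breaking rule above reduces to comparing two absolute gray-level differences; a minimal sketch (function name and return labels are illustrative):

```python
def break_tie(gray_i, gray_j, gray_m):
    """When targets j and m both minimize the cost function for target i,
    prefer the candidate whose mean gray level changed least."""
    return 'j' if abs(gray_i - gray_j) < abs(gray_i - gray_m) else 'm'
```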

The beneficial effect of the present invention is that the provided image-information-based passenger-flow recognition and counting method offers a reference solution to this class of multi-target recognition, tracking, and counting problems; the method is highly real-time, robust, intelligent, and stable.

Brief Description of the Drawings

The present invention is further described below with reference to the accompanying drawings and embodiments, in which:

Fig. 1 is a schematic diagram of the on-site hardware installation for image acquisition;

Fig. 2 shows the on-site background image and detection-region ROI setting of one experiment of the present invention;

Fig. 3 shows a target entering the detection region against the background of Fig. 1;

Fig. 4 is the difference image of the moving-target presence-detection stage;

Fig. 5 shows the result of human-head target recognition and feature extraction;

Fig. 6 shows the head-recognition results displayed in the original image;

Fig. 7 is the target feature table of a group of dynamic images;

Fig. 8 shows the centroid trajectories of the multiple targets of Fig. 6;

Fig. 9 shows the tracking windows established for the multiple targets of Fig. 6;

Fig. 10 is the flow chart of the method of the present invention.

Detailed Description

To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to embodiments. It should be understood that the specific embodiments described here only explain the present invention and do not limit it.

Fig. 10 shows the flow of a method for recognizing and counting passenger flow based on image information; one embodiment is described below.

A camera 2 is installed at the top of the entrance 1 of a shopping mall; a PC host communicates with camera 2 through an image-acquisition card 3, processes the captured image information, and displays it on monitor 5. Pedestrian traffic is recognized and counted from overhead images of people; the on-site hardware installation is shown in Fig. 1.

Step 1: detection and recognition of moving targets

The experiment uses an image resolution of 600×560. To reduce the processing load, each image is divided into blocks of size 150×140. The camera captures images at 4 frames/s. The specific steps are as follows:

1) Set the four parameters of the target-image detection region ROI: ROI.X is the x-axis coordinate of the top-left corner of the ROI, ROI.Y is its y-axis coordinate, ROI.Width is the width of the ROI, and ROI.Height is its height. In this example: ROI.X = 40, ROI.Y = 50, ROI.Width = 50, ROI.Height = 60.

2) Using the multi-image averaging method, superimpose and average M frames to obtain the background image of the video segment when no target is present, as shown in Fig. 2.
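A minimal sketch of the multi-frame averaging (NumPy assumed; the function name is illustrative):

```python
import numpy as np

def estimate_background(frames):
    """Average M frames captured while no target is present (step 2)."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Tiny synthetic illustration: three uniform 2x2 "frames".
frames = [np.full((2, 2), v, dtype=np.uint8) for v in (10, 20, 30)]
bg = estimate_background(frames)
print(bg[0, 0])  # 20.0
```

In practice the background would be re-estimated whenever the scene is known to be empty, since lighting drifts over time.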

3) Divide the captured i-th frame of the video into blocks, record it as the current parameter-determination frame, and select the target-image detection region ROI, as shown in Fig. 3.
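The block partitioning can be sketched as follows (NumPy assumed; names are illustrative). A 600×560 frame split into 150×140 blocks yields 4×4 = 16 sub-images:

```python
import numpy as np

def split_blocks(img, block_w=150, block_h=140):
    """Split a frame into block_w x block_h sub-images, row by row."""
    h, w = img.shape[:2]
    return [img[y:y + block_h, x:x + block_w]
            for y in range(0, h, block_h)
            for x in range(0, w, block_w)]

frame = np.zeros((560, 600), dtype=np.uint8)  # rows x cols = 560 x 600
blocks = split_blocks(frame)
print(len(blocks))  # 16
```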

4) Binarize the current frame, apply the distance transform, then mean filtering or sub-sampling, to obtain the enhanced image of the current frame; difference it against the (i-1)-th frame processed through steps 1)-3), and record the difference count within the selected ROI as N_Z.

5) Compare N_Z with the preset minimum-element alarm threshold N_0: if N_Z < N_0, no motion is present, so automatically capture frame i+1 and return to step 2); if N_Z ≥ N_0, save the i-th frame, denoted I_1.

6) Perform difference-based adaptive background segmentation between the i-th frame I_1 and the background image I_0, binarize the result, apply morphological erosion, and record the difference count within the selected ROI as N_B; the difference image is shown in Fig. 4.

7) Compare N_B with the preset minimum-element alarm threshold N′_0: if N_B < N′_0, no target to be detected is present, so automatically capture frame i+1 and return to step 2); if N_B ≥ N′_0, raise a moving-target alarm and store the frame in the array MovingImages[i][j] for subsequent recognition and processing.
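The two-stage decision of steps 4)-7) (frame difference to detect motion, then background difference to confirm a target) can be sketched as follows; NumPy is assumed, and the 30-gray-level binarization threshold and function names are illustrative assumptions:

```python
import numpy as np

def count_changed_pixels(img_a, img_b, roi, thresh=30):
    """Count ROI pixels whose absolute difference exceeds `thresh`
    (a stand-in for the binarized difference image)."""
    x, y, w, h = roi
    a = img_a[y:y + h, x:x + w].astype(np.int32)
    b = img_b[y:y + h, x:x + w].astype(np.int32)
    return int((np.abs(a - b) > thresh).sum())

def detect_target(frame, prev_frame, background, roi, n0=50, n0_prime=50):
    """Frame difference first (N_Z vs N_0), then background
    difference (N_B vs N'_0); True means the frame is stored."""
    if count_changed_pixels(frame, prev_frame, roi) < n0:
        return False                     # no motion: grab the next frame
    return count_changed_pixels(frame, background, roi) >= n0_prime

# Synthetic illustration: a 10x10 bright patch appears inside the ROI.
bg = np.zeros((60, 60), dtype=np.uint8)
frame = bg.copy()
frame[10:20, 10:20] = 200
roi = (0, 0, 40, 40)
print(detect_target(frame, bg, bg, roi))  # True (100 changed pixels >= 50)
```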

Step 2: Multi-target tracking and statistical counting

Based on the video images read in, set the ROI detection region and save the minimum and maximum of its abscissa in the variables xin and xout. Select the consecutive frames in which moving targets appear and store them, in order, in the array MovingImages[i][j]. Read the first target frame of the image sequence in that array and record it as the current frame for multi-target tracking and counting, then perform online passenger-flow recognition and counting on the overhead video images according to the following steps. The results are stored in the counting variables PersonIn and PersonOut, which record the number of entering and leaving targets respectively and are initialized to 0. In this example, xin = 40 and xout = 100.

1) Start the in and out counters, defining movement along the positive x direction as "in" and along the negative x direction as "out".

2) Binarize the current target frame and apply morphological erosion with a suitable structuring element to remove false targets (i.e., non-head regions), obtaining a binary image containing only the connected regions of moving targets.

3) Traverse the binary image, extract the feature values of each connected target region Area[n][i], and store them in the corresponding linked list. The parameters include: the image and target indices Num and Index; the target centroid abscissa X and ordinate Y; the mean gray level Gray of the corresponding region in the original image; the length Length and width Width of the rectangular window enclosing the target; and the area Space of the target region. As shown in Fig. 4, the centroids of the two targets are (83.8530, 90.2874) and (118.8503, 33.7245); their mean gray levels are 85.9029 and 77.204; their enclosing windows are 30×29 and 34×27; and their areas are 762 and 755.
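A pure-Python sketch of this feature extraction (4-connected flood fill; a library labeling routine would normally be used instead, and the dictionary keys simply mirror the feature names in the text):

```python
import numpy as np
from collections import deque

def region_features(binary, gray):
    """Extract centroid, mean gray level, enclosing window, and area
    for each 4-connected region of a binary image."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    regions = []
    for sy in range(h):
        for sx in range(w):
            if not binary[sy, sx] or seen[sy, sx]:
                continue
            queue, pts = deque([(sy, sx)]), []
            seen[sy, sx] = True
            while queue:                      # breadth-first flood fill
                y, x = queue.popleft()
                pts.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] \
                            and not seen[ny, nx]:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            ys = [p[0] for p in pts]
            xs = [p[1] for p in pts]
            regions.append({
                "X": sum(xs) / len(xs),            # centroid abscissa
                "Y": sum(ys) / len(ys),            # centroid ordinate
                "Gray": float(np.mean([gray[p] for p in pts])),
                "Length": max(ys) - min(ys) + 1,   # enclosing window
                "Width": max(xs) - min(xs) + 1,
                "Space": len(pts),                 # region area
            })
    return regions

# One 2x2 target region on a uniform gray image.
binary = np.zeros((5, 5), dtype=bool)
binary[1:3, 1:3] = True
gray = np.full((5, 5), 80.0)
regions = region_features(binary, gray)
print(regions[0]["Space"])  # 4
```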

4) From the feature values obtained for target Area[n][i], use Kalman filtering to predict the centroid position p(xi, yi) and search region of the connected target region Area[n+1][i] in the next frame.
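A one-dimensional sketch of the constant-velocity Kalman recursion used for this prediction (the same per-feature recursion as in claim 8; the Q, R, and initial-P values are illustrative assumptions):

```python
class ScalarKalman:
    """Predict x + v*T and inflate the deviation estimate by Q; on a
    measurement z, blend with gain Kg = P / (P + R)."""
    def __init__(self, x0, v, T=0.25, Q=0.01, R=1.0, P=1.0):
        self.x, self.v, self.T = x0, v, T
        self.P, self.Q, self.R = P, Q, R

    def predict(self):
        self.x += self.v * self.T
        self.P += self.Q
        return self.x

    def correct(self, z):
        kg = self.P / (self.P + self.R)
        self.x += kg * (z - self.x)
        self.P *= (1.0 - kg)
        return self.x

# Centroid abscissa at 80 px moving at 4 px per frame.
kf = ScalarKalman(x0=80.0, v=4.0, T=1.0)
predicted = kf.predict()
print(predicted)  # 84.0
```

Running one such filter per feature (x, y, window length, window width) reproduces the per-feature update equations given in claim 8.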

5) Capture the next frame and apply the operation of step 1) to it.

6) Same as step 2) of Step 2.

7) Using the improved cost function, match the connected target regions of the next frame within the Kalman-predicted search region of the current frame: compute the cost-function values between the current-frame target Area[n][i] and all targets in the corresponding search region of the next frame, and find the minimum among them (assume the cost is minimal with next-frame target Area[n+1][j]).
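The improved cost function can be sketched as a weighted, normalized sum of the changes in centroid distance, mean gray level, and area; the weights α = 0.5, β = 0.3, γ = 0.2 and the normalization maxima are illustrative assumptions, while the feature values are those quoted in this example:

```python
def cost(a, b, max_d, max_g, max_s, alpha=0.5, beta=0.3, gamma=0.2):
    """V = alpha*D + beta*H + gamma*S, each term normalized by its
    maximum over the search region."""
    d = ((a["X"] - b["X"]) ** 2 + (a["Y"] - b["Y"]) ** 2) ** 0.5 / max_d
    h = abs(a["Gray"] - b["Gray"]) / max_g
    s = abs(a["Space"] - b["Space"]) / max_s
    return alpha * d + beta * h + gamma * s

def best_match(target, candidates, **kw):
    """Pick the candidate in the Kalman search region with minimum cost."""
    return min(candidates, key=lambda c: cost(target, c, **kw))

target = {"X": 83.8530, "Y": 90.2874, "Gray": 85.9029, "Space": 762}
c1 = {"X": 80.7355, "Y": 91.3539, "Gray": 86.9632, "Space": 760}
c2 = {"X": 116.1797, "Y": 34.5599, "Gray": 77.7487, "Space": 768}
best = best_match(target, [c1, c2], max_d=65.0, max_g=10.0, max_s=10.0)
print(best is c1)  # True: the nearby, similar target wins
```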

8) Compute the centroid distance d between targets Area[n][i] and Area[n+1][j] and compare it with the distance limit d0. If d ≤ d0, Area[n+1][j] is the successor of Area[n][i]: replace the feature values of Area[n][i] with those of Area[n+1][j], mark Area[n+1][j], and establish the target chain.

9) If d > d0, target Area[n][i] has no successor in the next frame; it may have left the image observation window or be temporarily stationary. The centroid coordinate of Area[n][i] must be examined:

When X ≤ xin: if the first centroid abscissa X1 of the trajectory of Area[n][i] satisfies X1 ≤ xin, the target has been lingering inside the reserved zone and no counter is incremented; if X1 > xin, the target has left the tracking window in the "out" direction, so the out counter PersonOut is incremented by 1 and the target chain Area[n][i] is cleared.

When X > xout: if the first centroid abscissa X1 of the trajectory satisfies X1 ≥ xout, no counter is incremented; if X1 < xout, the target has entered through the tracking window in the "in" direction, so the in counter PersonIn is incremented by 1 and the target chain Area[n][i] is cleared.

When xin < X < xout: target Area[n][i] is still moving inside the detection zone and its final direction of motion is unknown, so its feature values are retained while awaiting tracking of this target in the next frame. If a matching target exists in the next frame, the target chain is established as above; otherwise the target is discarded as interference.
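The three cases above reduce to comparing a lost target's last centroid abscissa X and the first abscissa X1 of its trajectory against the ROI bounds; a sketch (names are illustrative):

```python
def update_counters(track_first_x, x, xin, xout, counters):
    """Return 'clear' when the target chain should be dropped,
    'keep' when the target stays under observation."""
    if x <= xin:
        if track_first_x > xin:
            counters["PersonOut"] += 1   # crossed the window in -x: out
        return "clear"                   # lingering or left: drop the chain
    if x > xout:
        if track_first_x < xout:
            counters["PersonIn"] += 1    # crossed the window in +x: in
        return "clear"
    return "keep"                        # still inside: wait for next frame

counters = {"PersonIn": 0, "PersonOut": 0}
update_counters(track_first_x=50, x=110, xin=40, xout=100, counters=counters)
print(counters)  # {'PersonIn': 1, 'PersonOut': 0}
```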

10) After all tracked targets have been matched, verify whether every target in the current frame is tracked. For any untracked target, check whether its centroid abscissa satisfies X ≤ xin or X > xout: if so, a new target has appeared, so establish a new target chain for it and set its feature values; otherwise it may be interference and is discarded, as shown in Fig. 5. In this example, the matched feature values of the two targets in the next frame are: centroids (80.7355, 91.3539) and (116.1797, 34.5599); mean gray levels 86.9632 and 77.7487; enclosing windows 31×29 and 35×28; areas 760 and 768.

11) After all targets in a frame have been recognized and counted, compute the difference between the in counter PersonIn and the out counter PersonOut and assign it to the variable CustomCounting, which is the current passenger-flow statistic. In this example, for the current frame PersonIn = 2 and PersonOut = 0, so CustomCounting = 2 − 0 = 2.

The recognition and matching results are displayed on the original image, as shown in Fig. 6; target recognition and counting for the current frame is complete.

Repeat steps 1)-10) to recognize and automatically count the targets of each frame in turn, until all images in the array MovingImages[i][j] have been processed. The values of the two counters PersonIn and PersonOut at each instant, together with their difference CustomCounting, are stored in a database for statistical analysis of passenger flow. In this example, the final passenger-flow statistic is CustomCounting = 572. For one video segment used, the feature quantities of two targets in each frame of the detection region during recognition and counting are shown in Fig. 7; the centroid trajectories and enclosing-window trajectories of these two targets, reconstructed from the recognition data, are shown in Fig. 8 and Fig. 9 respectively.

It should be understood that those skilled in the art may make improvements or modifications based on the above description, and all such improvements and modifications fall within the protection scope of the appended claims of the present invention.

Claims (9)

1. A passenger-flow identification and statistics method based on image information, characterized by comprising the following steps:
1) determining the target detection region: setting the four parameters of the target-image detection region ROI, where ROI.X represents the x-axis coordinate of the top-left corner of the ROI, ROI.Y represents its y-axis coordinate, ROI.Width represents the width of the ROI, and ROI.Height represents its height;
2) determining the background image of the target detection region when no target to be detected is present;
3) processing every frame of the captured video, and storing the frames in which moving targets appear, in order, in an array;
4) judging whether the frames in the array are consecutive, and processing the frames in the array in blocks according to the result;
5) performing statistical counting on the block-processed frames to determine the incoming and outgoing passenger flow.
2. The passenger-flow identification and statistics method according to claim 1, characterized in that the background image in step 2) is determined as follows: using the multi-image averaging method, multiple original frames are superimposed and averaged to obtain the background image of the video when no target is present.
3. The passenger-flow identification and statistics method according to claim 1, characterized in that in step 3) every captured video frame is enhanced as follows: the frame is binarized and distance-transformed, then mean-filtered or sub-sampled, yielding the enhanced image of the current frame.
4. The passenger-flow identification and statistics method according to claim 3, characterized in that in step 3) the presence of a moving target in each captured video frame is judged as follows:
3.1) dividing the captured video frame into blocks, recording the i-th frame as the current frame, and selecting the target-image detection region ROI;
3.2) differencing the i-th frame against the processed (i-1)-th frame, and recording the difference count within the selected ROI as N_Z;
3.3) comparing the motion-detection difference N_Z with the preset minimum-element alarm threshold N_0: if N_Z < N_0, there is no motion; if N_Z ≥ N_0, motion exists and the current frame is retained;
3.4) differencing the current frame against the background image I_0, and recording the difference count within the selected ROI as N_B;
3.5) comparing the target-detection difference N_B with the preset minimum-element alarm threshold N′_0: if N_B < N′_0, there is no target to be detected; if N_B ≥ N′_0, raising a moving-target alarm and storing the current frame in the array MovingImages[i][j];
3.6) traversing all frames and saving, in order, every frame in which a moving target exists into the array MovingImages[i][j].
5. The passenger-flow identification and statistics method according to claim 3, characterized in that step 4) judges whether the frames in the array are consecutive as follows:
4.1) taking frames from the array in turn, performing difference-based adaptive background segmentation on the current target frame, binarizing the obtained motion-region image, applying morphological erosion with a 2×2 structuring element to remove false targets (i.e., non-head regions), and restoring the true target size by morphological dilation, to obtain a binary image containing only the connected regions of moving targets;
4.2) traversing the binary image containing only the connected regions of moving targets, and extracting the feature values of each connected target region Area[n][i] into the corresponding linked list, the feature parameters comprising: the image index Num, the target index Index, the centroid abscissa X and ordinate Y, the mean gray level Gray of the corresponding region in the original image, the length Length and width Width of the rectangular enclosing window, and the area Space of the target region;
4.3) predicting, by Kalman filtering from the extracted feature values of target Area[n][i], the search region around the centroid position p(xi, yi) of target Area[n+1][i] in the next frame;
4.4) computing the feature values of target Area[n+1][i] of the next frame;
4.5) matching each connected target region of the next frame within the Kalman-predicted search region of the current frame, computing the cost-function values between the current-frame target Area[n][i] and all connected target regions in the corresponding search region of the next frame, and finding the minimum among them;
4.6) computing the centroid distance d between target Area[n][i] and any target region Area[n+1][j] of the corresponding search region of the next frame, and comparing it with the centroid-distance limit d0:
if d ≤ d0, target Area[n+1][j] is the successor of target Area[n][i]: the feature values of Area[n][i] are replaced by those of Area[n+1][j], Area[n+1][j] is marked, and the target chain is established;
if d > d0, target Area[n][i] has no successor in the next frame;
4.7) processing the frames in the array in blocks according to the result.
6. The passenger-flow identification and statistics method according to claim 5, characterized in that in step 5) the block-processed frames are statistically counted as follows:
5.1) saving the minimum and maximum abscissa of the target-image detection region ROI in the variables xin and xout respectively, then reading the first original frame of the sequence in the array MovingImages[i][j] and recording it as the current target frame for multi-target tracking and counting;
5.2) defining the counting variables PersonIn and PersonOut to record the numbers of entering and leaving targets respectively, initializing them to 0, and defining movement along the positive X direction as "in" and against the positive X direction as "out";
5.3) for the centroid abscissa X of target Area[n][i]:
when X ≤ xin: if the first centroid abscissa X1 of the trajectory of Area[n][i] satisfies X1 ≤ xin, the target has been lingering in the reserved zone and no counter acts; if X1 > xin, the target has left the tracking window in the "out" direction, the out counter PersonOut is incremented by 1, and the target chain of Area[n][i] is cleared; the first centroid abscissa is the centroid abscissa at which the target first enters the tracking window, i.e., the first centroid abscissa captured for this target in the tracking window;
when X > xout: if the first centroid abscissa X1 of the trajectory of Area[n][i] satisfies X1 ≥ xout, the target has been lingering in the reserved zone and no counter acts; if X1 < xout, the target has entered through the tracking window in the "in" direction, the in counter PersonIn is incremented by 1, and the target chain of Area[n][i] is cleared;
when xin < X < xout: target Area[n][i] is still moving inside the detection zone and its final direction is unknown, so its feature values are retained while awaiting tracking in the next frame; if a matching target exists in the next frame, the target chain is established by the above steps 4.3)-4.6); otherwise it is discarded as interference;
5.4) after all connected target regions in the tracking window have been matched, verifying whether every target in the current frame is tracked; for any untracked target, judging whether its centroid abscissa satisfies either X ≤ xin or X > xout: if so, a new target has appeared, a new target chain is established for it, its feature values are obtained and recorded, and the method proceeds to step 4.3); if not, it is judged to be interference and discarded;
5.5) after all targets in the current frame have been recognized and counted, computing the difference between the in counter PersonIn and the out counter PersonOut, recorded as the current passenger flow CustomCounting; the target recognition and counting of the current frame ends;
5.6) recognizing and automatically counting the connected target regions of every frame in the image array MovingImages[i][j] in turn, until the image sequence in MovingImages[i][j] has been completely processed, whereupon the whole target recognition and counting ends.
7. The passenger-flow identification and statistics method according to claim 5, characterized in that in step 4.5) multi-target tracking and matching is performed with the improved target-chain cost function, the steps being:
4.5.1) obtaining three features of each target in the k-th frame: the centroid coordinates, the mean gray level, and the area of the target region, the features of target i being denoted Point[k][i], Gray[k][i], and Space[k][i] respectively;
4.5.2) using the Kalman filter to predict the search region in the (k+1)-th frame of target i of the k-th frame, traversing the search region of target i in the (k+1)-th frame, finding n target objects and obtaining their features, the features of target j being denoted Point[k+1][j], Gray[k+1][j], and Space[k+1][j] respectively;
4.5.3) computing the degrees of change of the three features between target i of the k-th frame and the n target objects in the search region of target i in the (k+1)-th frame, and taking the maxima, denoted MaxPoint, MaxGray, and MaxSpace respectively, according to the formulas:
MaxPoint[i]=Max(sqrt((Point[k][i].x-Point[k+1][m].x)*(Point[k][i].x-Point[k+1][m].x)+(Point[k][i].y-Point[k+1][m].y)*(Point[k][i].y-Point[k+1][m].y)))
MaxGray[i]=Max(sqrt((Gray[k][i]-Gray[k+1][m])*(Gray[k][i]-Gray[k+1][m])))
MaxSpace[i]=Max(sqrt((Space[k][i]-Space[k+1][m])*(Space[k][i]-Space[k+1][m])))
where 1 ≤ m ≤ n denotes any one of the n targets in the search region of target i, and sqrt denotes the square-root operation;
4.5.4) computing the centroid distance D[i][j], gray-level change H[i][j], and area change S[i][j] between target i of the k-th frame and target j of the (k+1)-th frame, according to the formulas:
D[i][j]=sqrt((Point[k][i].x-Point[k+1][j].x)*(Point[k][i].x-Point[k+1][j].x)+(Point[k][i].y-Point[k+1][j].y)*(Point[k][i].y-Point[k+1][j].y))/MaxPoint[i];
H[i][j]=sqrt((Gray[k][i]-Gray[k+1][j])*(Gray[k][i]-Gray[k+1][j]))/MaxGray[i];
S[i][j]=sqrt((Space[k][i]-Space[k+1][j])*(Space[k][i]-Space[k+1][j]))/MaxSpace[i];
4.5.5) the cost function V[i][j] is then obtained from the three feature changes of step 4.5.4):
V[i][j]=αD[i][j]+βH[i][j]+γS[i][j];
where the three coefficients α, β, γ represent the influence factors of the three features and can be adjusted according to actual conditions.
8. The passenger-flow identification and statistics method according to claim 5, characterized in that in step 4.3) the specific steps of predicting the search region with the Kalman filter are:
4.3.1) assuming that the centroid of each target in the image moves uniformly and that the area of the enclosing window remains essentially constant, initializing the target's velocity V_x in the x direction, its velocity V_y in the y direction, the rate of change Vt of the length and width of the enclosing window, and the image sampling step T (i.e., the time interval between successive frames of the image sequence);
4.3.2) obtaining, from target i in the (k-1)-th frame, the maximum-likelihood estimates of the features of the corresponding target i in the k-th frame: the centroid X coordinate Point[k][i].x and Y coordinate Point[k][i].y, the length Win[k][i].length and width Win[k][i].width of the enclosing window, and the deviation estimate P[k][i] of the two feature quantities;
4.3.3) from the result of step 4.3.2), computing the predicted feature values and deviation estimate of target i in the (k+1)-th frame, according to the formulas:
Point[k+1][i].x=Point[k][i].x+V_x*T;
Point[k+1][i].y=Point[k][i].y+V_y*T;
Win[k+1][i].length=Win[k][i].length+Vt*T;
Win[k+1][i].width=Win[k][i].width+Vt*T;
P[k+1][i]=P[k][i]+Q;
4.3.4) from the predicted feature values and deviation estimate of target i in the (k+1)-th frame, computing the corresponding Kalman gain:
Kg[k+1][i]=P[k][i]/(P[k][i]+R)
then computing the optimal estimates of each feature and of the deviation of target i in the (k+1)-th frame, and updating their predicted values:
Point[k+1][i].x=Point[k+1][i].x+Kg[k+1][i]*(Z[k+1][i].x-Point[k+1][i].x);
Point[k+1][i].y=Point[k+1][i].y+Kg[k+1][i]*(Z[k+1][i].y-Point[k+1][i].y);
Win[k+1][i].length=Win[k+1][i].length+Kg[k+1][i]*(Z[k+1][i].length-Win[k+1][i].length);
Win[k+1][i].width=Win[k+1][i].width+Kg[k+1][i]*(Z[k+1][i].width-Win[k+1][i].width);
P[k+1][i]=(1-Kg[k+1][i])*P[k][i];
4.3.5) whereby the predicted search region (X, Y) of target i in the (k+1)-th frame is obtained in the k-th frame as:
Point[k+1][i].x-1.5*Win[k+1][i].length/2≤X≤Point[k+1][i].x+1.5*Win[k+1][i].length/2;
Point[k+1][i].y-1.5*Win[k+1][i].width/2≤Y≤Point[k+1][i].y+1.5*Win[k+1][i].width/2.
9. The passenger-flow identification and statistics method according to claim 5, characterized in that in step 4.5) the target association matching and data-association algorithm of the target chain is implemented as follows:
in the target association matching, if the cost-function values between target i of the k-th frame and both the j-th and the m-th targets in the predicted search region of target i in the (k+1)-th frame are equal and simultaneously minimal, the tracking and matching of targets conflicts, and the following improved algorithm is adopted to complete correct tracking and matching:
computing the cost functions between target i of the k-th frame and targets j and m of the (k+1)-th frame; if the cost-function values are equal and minimal, denoting the gray-level features of the above three targets Gray[k][i], Gray[k+1][j], and Gray[k+1][m] respectively;
computing the mean gray-level changes between target i of the k-th frame and targets j and m of the (k+1)-th frame according to the formulas:
Value1=sqrt((Gray[k][i]-Gray[k+1][j])*(Gray[k][i]-Gray[k+1][j]));
Value2=sqrt((Gray[k][i]-Gray[k+1][m])*(Gray[k][i]-Gray[k+1][m]));
if Value1 < Value2, target j of the (k+1)-th frame is the successor of target i of the k-th frame, the features of target i of the k-th frame are replaced by those of target j of the (k+1)-th frame, and the target chain is updated; otherwise target m of the (k+1)-th frame is the successor of target i of the k-th frame;
a target that does not successfully match target i of the k-th frame may match other targets of the k-th frame, or is retained to await a matching target in the next frame.
CN201510063946.7A 2015-02-06 2015-02-06 Image information-based passenger flow identification and statistics method Active CN104637058B (en)

Publications (2)

Publication Number Publication Date
CN104637058A true CN104637058A (en) 2015-05-20
CN104637058B CN104637058B (en) 2017-11-17


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105957108A (en) * 2016-04-28 2016-09-21 成都达元科技有限公司 Passenger flow volume statistical system based on face detection and tracking
CN106127137A (en) * 2016-06-21 2016-11-16 长安大学 A kind of target detection recognizer based on 3D trajectory analysis
CN106127292A (en) * 2016-06-29 2016-11-16 上海小蚁科技有限公司 Effusion meter counting method and equipment
CN106127812A (en) * 2016-06-28 2016-11-16 中山大学 A kind of passenger flow statistical method of non-gate area, passenger station based on video monitoring
CN106204633A (en) * 2016-06-22 2016-12-07 广州市保伦电子有限公司 A kind of student trace method and apparatus based on computer vision
CN106355682A (en) * 2015-07-08 2017-01-25 北京文安智能技术股份有限公司 Video analysis method, device and system
CN106408080A (en) * 2015-07-31 2017-02-15 富士通株式会社 Counting apparatus and method of moving object
CN106778675A (en) * 2016-12-31 2017-05-31 歌尔科技有限公司 A kind of recognition methods of target in video image object and device
CN107909081A (en) * 2017-10-27 2018-04-13 东南大学 The quick obtaining and quick calibrating method of image data set in a kind of deep learning
CN109204106A (en) * 2018-08-27 2019-01-15 浙江大丰实业股份有限公司 Stage equipment mobile system
CN109272535A (en) * 2018-09-07 2019-01-25 广东中粤电力科技有限公司 A kind of power distribution room safety zone method for early warning based on image recognition
CN110334569A (en) * 2019-03-30 2019-10-15 深圳市晓舟科技有限公司 The volume of the flow of passengers passes in and out recognition methods, device, equipment and storage medium
CN110838134A (en) * 2019-10-10 2020-02-25 北京海益同展信息科技有限公司 Target object statistical method and device, computer equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110142282A1 (en) * 2009-12-14 2011-06-16 Indian Institute Of Technology Bombay Visual object tracking with scale and orientation adaptation
CN103986910A (en) * 2014-05-20 2014-08-13 中国科学院自动化研究所 A method and system for counting passenger flow based on intelligent analysis camera



Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106355682A (en) * 2015-07-08 2017-01-25 Video analysis method, device and system
CN106408080B (en) * 2015-07-31 2019-01-01 Counting device and method for moving objects
CN106408080A (en) * 2015-07-31 2017-02-15 Counting device and method for moving objects
CN105957108A (en) * 2016-04-28 2016-09-21 Passenger flow volume statistics system based on face detection and tracking
CN106127137A (en) * 2016-06-21 2016-11-16 Target detection and recognition algorithm based on 3D trajectory analysis
CN106204633A (en) * 2016-06-22 2016-12-07 Student tracking method and apparatus based on computer vision
CN106204633B (en) * 2016-06-22 2020-02-07 Student tracking method and device based on computer vision
CN106127812B (en) * 2016-06-28 2018-10-12 Video-surveillance-based passenger flow statistics method for non-gated areas of passenger stations
CN106127812A (en) * 2016-06-28 2016-11-16 Video-surveillance-based passenger flow statistics method for non-gated areas of passenger stations
CN106127292B (en) * 2016-06-29 2019-05-07 Flow counting method and device
CN106127292A (en) * 2016-06-29 2016-11-16 Flow counting method and device
CN106778675A (en) * 2016-12-31 2017-05-31 Method and device for recognizing targets in video images
CN106778675B (en) * 2016-12-31 2019-11-08 Method and device for recognizing targets in video images
CN107909081A (en) * 2017-10-27 2018-04-13 Rapid acquisition and calibration method for image datasets in deep learning
CN109204106A (en) * 2018-08-27 2019-01-15 Stage equipment movement system
CN109204106B (en) * 2018-08-27 2020-08-07 Stage equipment movement system
CN109272535A (en) * 2018-09-07 2019-01-25 Power distribution room safety zone early-warning method based on image recognition
CN110334569A (en) * 2019-03-30 2019-10-15 Passenger flow entry and exit recognition method, apparatus, device and storage medium
CN110838134A (en) * 2019-10-10 2020-02-25 Target object statistics method and device, computer device and storage medium
CN110838134B (en) * 2019-10-10 2020-09-29 Target object statistics method and device, computer device and storage medium

Also Published As

Publication number Publication date
CN104637058B (en) 2017-11-17

Similar Documents

Publication Publication Date Title
CN104637058B (en) 2017-11-17 Passenger flow volume identification and statistics method based on image information
CN113034548B (en) Multi-target tracking method and system suitable for embedded terminal
US10909695B2 (en) System and process for detecting, tracking and counting human objects of interest
CN107527009B (en) Abandoned-object detection method based on YOLO target detection
CN102542289B (en) Pedestrian flow statistics method based on multiple Gaussian counting models
CN102982313B (en) Smoke detection method
CN103246896B (en) Robust real-time vehicle detection and tracking method
CN103324913A (en) Pedestrian event detection method based on shape features and trajectory analysis
CN106981202A (en) Trajectory-model-based method for detecting vehicles changing lanes back and forth
CN113139521A (en) Pedestrian boundary crossing monitoring method for electric power monitoring
CN102521565A (en) Garment identification method and system for low-resolution video
CN111681382A (en) Method for detecting temporary fence crossing in construction site based on visual analysis
CN110991397B (en) Travel direction determining method and related equipment
CN108734172B (en) Target identification method and system based on linear edge characteristics
CN106023248A (en) Real-time video tracking method
CN106778637B (en) Statistical method for male and female passenger flow
Anandhalli et al. Improvised approach using background subtraction for vehicle detection
CN117911965A (en) A method and device for identifying highway traffic accidents based on aerial images
CN102194270B (en) Statistical method for pedestrian flow based on heuristic information
CN106951820B (en) Passenger flow statistical method based on annular template and ellipse fitting
Heimbach et al. Improving object tracking accuracy in video sequences subject to noise and occlusion impediments by combining feature tracking with Kalman filtering
CN105447463A (en) Cross-camera automatic tracking system for substations based on human feature recognition
Song et al. An accurate vehicle counting approach based on block background modeling and updating
CN107403137B (en) Video-based dense crowd flow calculation method and device
CN117078718A (en) Multi-target vehicle tracking method in expressway scene based on deep SORT

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190513

Address after: 430000 No. 8, Dongxihu District, Wuhan, Hubei

Patentee after: Hubei Liangpin Puzi Logistics Co., Ltd.

Address before: 430081 No. 947, Heping Avenue, Qingshan District, Wuhan City, Hubei Province

Patentee before: Wuhan University of Science and Technology

CP01 Change in the name or title of a patent holder

Address after: 430000 No. 8, Dongxihu District, Wuhan, Hubei

Patentee after: Hubei Liangpinpu Supply Chain Technology Co., Ltd.

Address before: 430000 No. 8, Dongxihu District, Wuhan, Hubei

Patentee before: Hubei Liangpin Puzi Logistics Co., Ltd.