
CN111950499A - Method for detecting vehicle-mounted personnel statistical information - Google Patents

Method for detecting vehicle-mounted personnel statistical information

Info

Publication number
CN111950499A
CN111950499A (application CN202010849899.XA)
Authority
CN
China
Prior art keywords
vehicle
processor
face
data
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010849899.XA
Other languages
Chinese (zh)
Inventor
刘三军
杨雄威
孙先波
来国红
胡俊鹏
谭建军
朱黎
黄勇
徐建
高仕红
田相鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Enshi Shida Electronic Information Technology Co ltd
Hubei University for Nationalities
Original Assignee
Enshi Shida Electronic Information Technology Co ltd
Hubei University for Nationalities
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Enshi Shida Electronic Information Technology Co ltd, Hubei University for Nationalities filed Critical Enshi Shida Electronic Information Technology Co ltd
Priority to CN202010849899.XA priority Critical patent/CN111950499A/en
Publication of CN111950499A publication Critical patent/CN111950499A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G08SIGNALLING
    • G08CTRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C17/00Arrangements for transmitting signals characterised by the use of a wireless electrical link
    • G08C17/02Arrangements for transmitting signals characterised by the use of a wireless electrical link using a radio link
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting vehicle-mounted personnel statistical information. The system comprises a plurality of cameras and a processor: the cameras collect images inside the vehicle and capture all faces in the vehicle without obstruction; the distance from each face to the camera is then calculated from the size of the collected face, and a spatial model is built from these distances, which helps to count the number of occupants accurately. The scheme solves the problem of accurately counting the passenger flow of each vehicle in time and space. The face recognition technique provided by this scheme collects all faces in the vehicle comprehensively and smoothly; gender-analysis and age-analysis algorithms running on the in-vehicle processor then compute the gender ratio and age distribution of the occupants, making the operational adjustment of sightseeing vehicles and the development of scenic spots more convenient and effective.

Description

Method for detecting vehicle-mounted personnel statistical information
Technical Field
The invention relates to the field of scenic spot management, in particular to a method for detecting vehicle-mounted personnel statistical information.
Background
With the continuous improvement of living standards, transportation has developed rapidly, and sightseeing vehicles have become an indispensable part of many scenic spots. However, while people enjoy their convenience and comfort, many problems have been exposed: 1) when passenger flow is heavy, ticket collection in many places is chaotic, and fare evasion and ticket theft occur from time to time; 2) facing huge crowds of passengers, drivers frequently overload their vehicles beyond capacity, bringing numerous safety hazards and disputes; 3) in daily operation planning, if the passenger volume at each location in a scenic spot cannot be determined, departure times are scheduled unreasonably, causing great economic loss and resource waste.
Therefore, the key to solving these problems is how to accurately count the passenger volume of each vehicle in different areas and time periods. Traditional vehicle passenger counting mostly uses detection based on infrared sensing: infrared detectors are installed at each doorway of the vehicle; when a passenger passes, the signal between transmitter and receiver is blocked, the detector generates a pulse signal and sends it to a processor, and the processor counts the passenger flow from the pulse frequency. This approach has several disadvantages. First, infrared light is easily disturbed by ambient light, so the resulting statistics are inaccurate, and the sensor of the infrared device is easily occluded by foreign matter, causing large errors in the received signal. Second, by the infrared principle, when several passengers board at the same time they may be miscounted as a single passenger, or different body parts of one passenger may be counted as several passengers. Finally, because passengers differ in height, the mounting position of the infrared device is difficult to choose, easily causing blind zones or misjudgment. All of these factors bias the in-vehicle passenger statistics.
Disclosure of Invention
The present invention aims to solve the above problems by providing a method for detecting vehicle-mounted personnel statistical information. In this scheme, a plurality of depth cameras are installed in the vehicle to capture in-vehicle images from all directions; the occupants are counted with a face detection and recognition algorithm; age and gender recognition algorithms are then invoked to obtain occupant information for each road section. By mastering the age and gender distribution of tourists in different areas, scenic spots can develop different projects and target publicity at tourists of different ages and genders.
The invention realizes the purpose through the following technical scheme:
the invention adopts a plurality of cameras arranged in the motor vehicle to collect the images in the vehicle, and the plurality of cameras in the vehicle can smoothly collect the faces in the vehicle, thereby avoiding blind areas when a single camera collects the faces and reducing the influence of shielding. The multiple-camera installation scheme provided by the invention is shown in figure 1, wherein the camera number 1 is closest to a driver in a vehicle, and the camera numbers 2, 3 and 4 are sequentially arranged in a clockwise direction. In order to shoot the human face in the car comprehensively and reduce the repeated occurrence rate of the human face, the included angle between the camera and the long edge in the car ranges from 30 degrees to 60 degrees, and the camera is rotatable and can be adjusted correspondingly according to the lengths of different cars.
The invention uses processors to process the face data collected by the in-vehicle cameras, which is the key point of this patent. Two processes are involved. First, a convolutional neural network is combined with a Local Feature Analysis (LFA) face recognition algorithm to improve recognition accuracy when processing the camera data. Second, the distance from the camera is calculated from the size of each collected face, and a spatial model is built from these distances to help count the occupants accurately. How to count a person who appears in two cameras simultaneously is also a major technical highlight of this patent. In this design, a vehicle carries 4 cameras, each connected to a processor by a data line: processor 1, processor 2, processor 3 and processor 4. Processor 1 has a computing capacity above 30 MFLOPS; it is the master processor, and most operations in the system are completed on it. Processors 2, 3 and 4 have a computing capacity of no less than 10 MFLOPS; they are slave processors that mainly run the face recognition algorithm and transmit processed data to the master over a data bus. After each camera collects its data, the corresponding processor extracts 32 face feature vectors from that data. The 4 processors communicate with one another over TCP/IP (Transmission Control Protocol/Internet Protocol), and the 3 slave processors transmit their feature data to the master processor.
Because the 4 cameras collect images independently of one another, the collected images contain repeated faces. To solve this problem, the 4 processors are given different register addresses and divided into four processing priority levels, according to the camera positions in the vehicle and the directions people were observed to face in our survey: processor 1 > processor 3 > processor 2 > processor 4. Based on the face feature data collected by camera 1, all identifiable faces in the image are recognized, the current total number of faces N is recorded, and the faces are classified by gender and age. The face feature data transmitted by the three slave processors are then compared and accumulated in priority order. The positions of the cameras in the vehicle are shown in fig. 1. When analyzing the collected face feature data, if more than one third of a face's feature vectors in camera 1 and camera 3 reach a similarity of 47.2%, the face is judged to be repeated and is not counted in the total N. If more than two thirds of the feature vectors in the data collected by camera 2 and camera 1 reach a similarity of 82.4%, or more than two thirds of those in camera 4 and camera 3 reach 82.4%, the face is judged to have already appeared in camera 1 or camera 3 and is not counted in the total N.
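The cross-camera deduplication rule above can be sketched as follows. This is an illustrative reading, not the patent's implementation: the function names are ours, cosine similarity is an assumed metric (the patent does not specify one), and only the thresholds (47.2% with a one-third fraction for cameras 1/3; 82.4% with a two-thirds fraction for cameras 2/1 and 4/3) come from the description.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors (assumed metric)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_duplicate(face_vecs, other_vecs, sim_threshold, frac_threshold):
    """Judge a face repeated when more than `frac_threshold` of its
    feature vectors match the other camera's face at or above
    `sim_threshold` similarity, per the rule in the description."""
    matches = sum(1 for a, b in zip(face_vecs, other_vecs)
                  if cosine_sim(a, b) >= sim_threshold)
    return matches / len(face_vecs) > frac_threshold

# Thresholds from the description:
#   cameras 1 vs 3: is_duplicate(f1, f3, 0.472, 1/3)
#   cameras 2 vs 1 and 4 vs 3: is_duplicate(f2, f1, 0.824, 2/3)
```

A face judged duplicate is simply skipped when accumulating the running total N.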
Because twins share many common features and thus have high similarity, and to avoid this type of error, different spatial position models are established for different vehicle types in this design. When extracting a face from the background, each processor records the position coordinates corresponding to that face in combination with the vehicle's model. When the core processor handles the data, face recognition is primary and spatial position is auxiliary, and the two are analyzed jointly to determine the number of passengers in the vehicle. A notable advantage of the invention is that the face feature data are processed by edge computing rather than being transmitted to a terminal after collection: no large-scale server needs to be built to process the data, and data loss during transmission is avoided.
The design also serves an important function of criminal identification and can help the police catch fugitives. The master processor has an interface for an external storage device provided by the public security department, in which the face information of current fugitives is stored. The processor can access a criminal database prepared for this system by the public security department over GPRS to download updated criminal face information in time; synchronization with the public security criminal database is completed once a day at a fixed time, so that complete and specific criminal face information is always available. Each time the in-vehicle processors finish a round of face recognition, the master processor compares all face information in the vehicle with the data in the external storage device. When a set of data matches the database with more than 18 feature vectors reaching a similarity of 58.5%, the system classifies that person as a possible criminal and appropriately adjusts the camera position according to the person's location to capture detailed face data. When more than 24 feature vectors reach a similarity of 81.7%, the person is judged to be a fugitive; the system promptly sends an early warning to the terminal and the public security department over GPRS, the master processor instructs the other processors to collect all information, and all cameras transmit their collected images to the master processor, which forwards them to the management center and the public security department, so that corresponding precaution and arrest measures can be taken.
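The two-tier matching rule can be sketched as below. Only the tier counts and thresholds (more than 18 of the 32 vectors at 58.5% for a possible match, more than 24 at 81.7% for a confirmed fugitive) are from the description; the function shape and the pluggable similarity callable `sim` are our assumptions.

```python
def classify_match(face_vecs, db_vecs, sim):
    """Two-tier rule from the description: count how many of the 32
    feature vectors clear each similarity threshold against a database
    entry, then escalate accordingly."""
    strong = sum(1 for a, b in zip(face_vecs, db_vecs) if sim(a, b) >= 0.817)
    weak = sum(1 for a, b in zip(face_vecs, db_vecs) if sim(a, b) >= 0.585)
    if strong > 24:
        return "confirmed"  # fugitive: GPRS alert to terminal and police
    if weak > 18:
        return "possible"   # adjust camera position for detailed capture
    return "no_match"
```

On a `"confirmed"` result the master processor would additionally instruct the other processors to forward all collected images, as described above.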
The system judges whether to perform face recognition by detecting the vehicle state, which also helps control vehicle safety. The core processor is connected to the vehicle's speedometer through a detection module. When the vehicle is detected to have started and to be travelling faster than a certain speed (1 km/h), the core processor sends a work instruction to the slave processors, and each in-vehicle processor captures faces through its camera. After the core processor completes one round of face recognition and statistical analysis, the resulting face data are transmitted to the terminal over GPRS, where light post-processing yields a clear data table. After each vehicle start, the whole system runs once per minute; after every three runs, the cameras rotate 15 degrees away from the long side. When the vehicle is detected to have stopped, the cameras return to their initial position and the processors automatically enter the dormant state; the running state is reactivated when the vehicle moves again. If a single trip is short, the system finishes its current task, sends out its data, and enters the dormant state when the vehicle speed is detected to be 0. The whole system therefore has relatively low power consumption.
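The duty cycle described above can be sketched as a small state machine. The class and method names are illustrative; the 1 km/h wake-up speed, the once-per-minute cadence, the 15-degree rotation after every three runs, and the reset-and-sleep-on-stop behaviour are from the description.

```python
class CaptureScheduler:
    """Sketch of the capture duty cycle: recognise once per minute while
    the vehicle moves faster than 1 km/h, rotate the cameras 15 degrees
    away from the long side after every three runs, and reset to the
    initial pose and go dormant once the vehicle stops."""

    def __init__(self):
        self.runs = 0
        self.angle_offset = 0  # degrees relative to the initial pose
        self.sleeping = True

    def tick(self, speed_kmh):
        """Called once per minute with the current speedometer reading."""
        if speed_kmh <= 1:
            # vehicle stopped: restore pose, reset counter, go dormant
            self.angle_offset = 0
            self.runs = 0
            self.sleeping = True
            return "sleep"
        self.sleeping = False
        self.runs += 1
        if self.runs % 3 == 0:
            self.angle_offset += 15
        return "capture"
```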
The invention has the beneficial effects that:
Compared with the prior art, this method for detecting vehicle-mounted personnel statistical information solves the problem of accurately counting the passenger flow of each vehicle in time and space. The face recognition technique provided by this scheme collects all faces in the vehicle comprehensively and without obstruction; the gender-analysis and age-analysis algorithms on the in-vehicle processor then compute the gender ratio and age distribution of the occupants, making the operational adjustment of sightseeing vehicles and the development of scenic spots more convenient and effective.
Drawings
FIG. 1 is a profile of a depth camera within a vehicle;
FIG. 2 is an overall architecture of the system;
fig. 3 is a flow chart of the system implementation.
Detailed Description
The invention will be further described with reference to the accompanying drawings in which:
the invention explains a concrete design scheme and a principle structure based on face recognition counting by taking a bus provided with four cameras as an example, as shown in figure 1, depth cameras are arranged at four different positions of the bus, when a sensor hung on a processor 1 detects that the bus is started, the processor 1 sends a working instruction to the cameras, the cameras start to work, the four cameras in different directions capture images on the bus in an omnibearing manner and then return data to the processor 1 for processing, and the processing of the processor 1 mainly comprises the following steps: firstly, preprocessing image data, namely eliminating or reducing noise and other interferences of data sampling so as to improve the signal-to-noise ratio, eliminate or reduce data image blurring and geometric distortion, improve definition and change the structure of a mode; and then, feature extraction is carried out, features which have obvious effect on face recognition in the picture are extracted, and the dimension of the data can be compressed through the feature extraction, so that the data are convenient to process and the loss is reduced. After the characteristics are extracted, the human Face is identified and classified, the human Face is extracted from the background according to the characteristics of the human Face, then a spatial position model corresponding to the human Face is established by comprehensively using a convolutional neural network and an algorithm of Local Face Analysis (Local characteristic Analysis) human Face identification in combination with a triangulation principle so as to determine the number of people on the vehicle and the position of each person, then repeated parts in different pictures are filtered, and then the people in the vehicle are classified according to gender, age and the like. 
Meanwhile, a temporary database is established locally and compared with the criminal database downloaded from the local public security department; after a suspicious individual is identified, the system sends a warning to the terminal in the background. Finally, the local data are returned to the management end over GPRS, so that managers can easily master passenger-flow data for later adjustment and operation.
To capture in-vehicle images without dead angles, this design installs cameras at four different positions in the vehicle, preventing the blind-zone and occlusion problems of a single camera and capturing faces in different directions at each position in the vehicle.
To analyze and process the data efficiently and simplify transmission, the data captured by the cameras are run directly through an AdaBoost face detection algorithm based on Haar features and a personnel-information analysis algorithm on the in-vehicle device's ARM platform to obtain the occupant information; only this condensed data is then transmitted to the cloud over the network, greatly reducing the amount of data to transmit and improving the efficiency of information acquisition.
To improve the accuracy and speed of data processing, this patent combines a convolutional neural network with the eigenface method. The convolutional neural network comprises five parts: an input layer, convolutional layers, pooling layers, fully connected layers and a softmax layer. The input layer is the input of the whole network and, in the convolutional network of this patent, represents the pixel matrix of the picture. Depending on the number of channels, the picture's pixel matrix has different depths: a black-and-white picture has only 1 channel and therefore depth 1, while an RGB image has 3 channels and therefore depth 3, which is the depth used in this design. A convolutional layer consists of a series of feature maps obtained by convolution operations; each unit in a convolutional layer is connected to a region of units in the previous layer, and the size of that region is the size of the convolution kernel. Common kernel sizes are 3 x 3 and 5 x 5; we use 5 x 5 kernels. To extract more abstract features from each small block, each unit matrix of our convolutional layers is deeper than that of the previous layer. In our design, the unit matrix of a pooling layer has essentially the same depth as the previous layer, but the matrix shrinks in width and height.
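The layer sequence above can be made concrete with a shape walk-through. Only the five-part layout, the 5 x 5 kernels, and the input depth of 3 are from the description; the filter count, pooling size, fully connected width, and class count are assumptions for illustration.

```python
def cnn_layout(h, w, channels=3):
    """Trace feature-map shapes through the five parts named in the
    description: input, conv, pool, fully connected, softmax.
    Filter count (32), 2x2 pooling, fc width (128) and class count
    (24, assuming 12 age groups x 2 genders) are illustrative."""
    layers = [("input", (h, w, channels))]
    # 5x5 convolution with 'valid' padding shrinks each side by 4
    h, w, d = h - 4, w - 4, 32
    layers.append(("conv 5x5", (h, w, d)))
    # 2x2 max pooling halves width and height, keeps depth
    h, w = h // 2, w // 2
    layers.append(("pool 2x2", (h, w, d)))
    layers.append(("fc", (128,)))
    layers.append(("softmax", (24,)))
    return layers
```

For a 32 x 32 RGB input this gives a 28 x 28 x 32 map after convolution and 14 x 14 x 32 after pooling, illustrating how pooling shrinks width and height while keeping depth.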
How to classify the data is also one of the key points of this patent. In this design, classification falls into two major categories: gender and age group. For training, 600 face samples were collected, covering ages 10 to 70 in 5-year bands, giving 12 age groups of 50 people each, with a male-to-female ratio of 1:1. During training, many different feature vectors can be detected for a face; the 32 face feature vectors with the largest differences in eigenvalues are selected for face classification, and a template face image database is built from the common features extracted from people of the same gender and age. After face recognition is completed, each person's image feature vector is compared with the template library and assigned to the class with the highest feature similarity.
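The template-library assignment can be sketched as a nearest-template search. The Euclidean distance metric and the function name are our assumptions; the idea of comparing each feature vector against per-class (gender, age-group) templates is from the description.

```python
import numpy as np

def classify_person(face_vec, templates):
    """Assign a face feature vector to the nearest class template.
    `templates` maps a (gender, age_group) label to that class's mean
    template vector; Euclidean distance is an assumed metric."""
    best, best_d = None, float("inf")
    for label, tmpl in templates.items():
        d = float(np.linalg.norm(np.asarray(face_vec) - np.asarray(tmpl)))
        if d < best_d:
            best, best_d = label, d
    return best
```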
Another highlight of this patent is the security function: it can determine whether there are criminals in the vehicle, raise an automatic early warning in the background, and feed the information back to the police department via the terminal in time. The process of identifying criminals is essentially as described above. We first contact the public security department to obtain the face database of fugitives, and install a disk dedicated to storing such data at the vehicle's local end. After each round of recognition, each occupant's features are compared with the data in this database. If the similarity exceeds 80%, the background is alerted and a second round of recognition is started; if the similarity then reaches 90%, a background alarm is raised, and the data collected by the cameras are all transmitted to the terminal and the public security department so that corresponding precautions can be taken.
Light has a large influence on vision; even a human may misrecognize a face when the light is too dim. In face image recognition, illumination affects the structure of the target image, so that its contour and texture deviate, and face images of the same person obtained under different lighting conditions differ. Therefore, for a widely applicable face image recognition system, illumination is a factor that must be considered. Illumination has always been a key difficulty of image recognition; in this design, a face feature detection technique based on Principal Component Analysis (PCA) is adopted to weaken its influence. First, when training on the samples, a correlation matrix is constructed and KL-transformed to obtain 16 eigenvectors, and the face image is reconstructed to test the representational capability of these eigenvectors. The reconstruction formula is:
I_rec = Σ_{i=1}^{N} ω_i μ_i
where I_rec is the reconstructed face image; μ_i is an eigenvector; N is the number of eigenvectors used; and ω_i is the projection coefficient obtained by projecting the face image onto the feature space, given by:
ω_i = μ_i^T I (i = 1, 2, …, N)
the reconstruction error is:
I_err = I − I_rec
During training, as the number of training samples increases, the feature vector space obtained by the PCA method reflects the face image space better and better; after all 600 training samples adopted by this method are trained, the effect of illumination is essentially eliminated. The final results show that although light has some effect on the PCA method, it has little effect on the final results.
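The projection and reconstruction formulas above can be sketched directly. This assumes the eigenvectors are orthonormal and stacked as rows; the function name is ours.

```python
import numpy as np

def pca_reconstruct(I, eigvecs):
    """Implement omega_i = mu_i^T I and I_rec = sum_i omega_i mu_i.
    `eigvecs` holds the N eigenvectors mu_i as rows (assumed
    orthonormal). Returns the reconstruction I_rec and the
    reconstruction error I_err = I - I_rec."""
    omegas = eigvecs @ I        # projection coefficients omega_i
    I_rec = eigvecs.T @ omegas  # weighted sum of eigenvectors
    return I_rec, I - I_rec
```

The norm of `I_err` measures how well the chosen eigenvectors represent the face image, which is exactly the capability test described above.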
Scale is another factor that must be considered in face recognition. When the cameras collect in-vehicle images, each person is at a different distance from the camera, so the faces in the resulting images have different scales, and images at different scales differ greatly; the PCA method is used to identify image scale. First, given M training images at scale S, the PCA method yields the eigenvectors μ_1^S, μ_2^S, μ_3^S, …, μ_M^S of the image set, from which a template Ω_S is formed, along with a threshold Θ_S for the vector set. When the distance d between a test image I and the template Ω_S is less than Θ_S, the scale of I is S; otherwise it is not S. Image templates for different scales are constructed from image sets at those scales, and these templates form a local library. After the camera collects the in-vehicle images, the distance d_i between I and each same-scale template is obtained by the following formula:
d_i = ||I − I_face-space||
where I is the face image to be recognized and I_face-space is the image obtained by projecting I into the face image space.
The scale template corresponding to the minimum distance d_i is taken as the match, finally yielding the scale data of each face in the image.
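The minimum-distance matching step can be sketched as below. The function names and the dictionary mapping scale labels to projected images are our assumptions; the distance formula d_i = ||I − I_face-space|| is from the text.

```python
import numpy as np

def scale_distance(I, I_face_space):
    """d_i = ||I - I_face-space||: distance between the probe image and
    its projection into the scale-S face space."""
    return float(np.linalg.norm(np.asarray(I) - np.asarray(I_face_space)))

def best_scale(I, projections):
    """Pick the scale whose template yields the smallest d_i.
    `projections` maps a scale label to the projection of I into that
    scale's face space (an assumed data layout)."""
    return min(projections, key=lambda s: scale_distance(I, projections[s]))
```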
The foregoing shows and describes the general principles, features and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which are presented in the specification only to illustrate its principle; various changes and modifications may be made without departing from the spirit and scope of the invention, and these fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (5)

1. A method for detecting vehicle-mounted personnel statistical information, characterized in that: the system comprises a plurality of cameras and a processor; the cameras collect in-vehicle images and can capture the faces in the vehicle without obstruction; in processing the data collected by the cameras, the processor mainly completes the following two processes: first, a convolutional neural network is combined with a local feature analysis face recognition algorithm to improve recognition accuracy; then the distance from the camera is calculated according to the size of each collected face, and a spatial model is established from these distances to help count the occupants accurately.
2. The method of detecting vehicle-mounted personnel statistical information as recited in claim 1, wherein: there are four cameras and four processors, each camera connected to one processor; the first processor, with computing power above 30 MFLOPS, serves as the main processor and performs most of the computation; the second, third, and fourth processors, each with computing power of no less than 10 MFLOPS, serve as subordinate processors, mainly run the face recognition algorithm, and transmit the processed data to the main processor over a data bus. When the face feature data collected by the cameras are analyzed: if more than one third of the feature vectors of a face captured by the first camera reach 47.2% similarity with a face captured by the third camera, the face is judged to be a repeat and is not counted in the total; if more than two thirds of the feature vectors in the data from the second camera reach 82.4% similarity with the first camera, or more than two thirds of the feature vectors in the data from the fourth camera reach 82.4% similarity with the third camera, the face is judged to have already appeared in the first or third camera and is not counted in the total.
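The pairwise repeat-suppression rules of claim 2 can be sketched as follows. This is an illustrative interpretation, not the patented implementation: it assumes each detected face is represented by a list of feature vectors, it uses cosine similarity as the per-vector similarity measure (the claim does not specify one), and all function names are hypothetical.

```python
import numpy as np

# (camera pair, per-vector similarity required, fraction of vectors required)
RULES = [
    ((1, 3), 0.472, 1 / 3),  # first vs third camera
    ((2, 1), 0.824, 2 / 3),  # second vs first camera
    ((4, 3), 0.824, 2 / 3),  # fourth vs third camera
]

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def is_duplicate(face_a, face_b, sim, frac):
    """A face is a repeat when at least `frac` of its feature vectors
    reach similarity `sim` against the other detection."""
    hits = sum(1 for u, v in zip(face_a, face_b) if cosine(u, v) >= sim)
    return hits / len(face_a) >= frac

def count_unique(faces_by_camera):
    """faces_by_camera: {camera index: [face, ...]}, face = list of vectors.
    Counts every detection, then discards those that the pairwise rules
    mark as repeats of a face on the paired reference camera."""
    total = 0
    for cam, faces in faces_by_camera.items():
        for face in faces:
            repeat = False
            for (a, b), sim, frac in RULES:
                if cam == a:
                    for other in faces_by_camera.get(b, []):
                        if is_duplicate(face, other, sim, frac):
                            repeat = True
                            break
                if repeat:
                    break
            if not repeat:
                total += 1
    return total
```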
3. The method of detecting vehicle-mounted personnel statistical information as claimed in claim 2, characterized in that: different spatial position models are established for different vehicle types; each processor extracts the faces from the background and, in combination with the vehicle model, records the position coordinates of each face; the main processor mainly performs face recognition and assists in processing the spatial position data, and together they analyze the number of passengers in the vehicle.
4. The method of detecting vehicle-mounted personnel statistical information as claimed in claim 3, characterized in that: the main processor is provided with an interface for an external storage device supplied by the public security department, in which the face information of current fugitives is stored; the main processor can access, via GPRS, a fugitive database prepared for the system by the public security department to download updated fugitive face information in time, synchronizing with the public security fugitive system library once a day at a fixed time so as to maintain complete and specific fugitive face information; after the in-vehicle processors complete one round of face recognition, the main processor obtains all face information in the vehicle and compares it with the data in the external storage device; when more than 18 feature vectors of a captured face reach 58.5% similarity with a record in the system library, the system classifies the person as a possible fugitive and appropriately adjusts the camera positions according to that person's position to capture detailed face data; when more than 24 feature vectors reach 81.7% similarity, the person is judged to be a wanted fugitive, the system promptly sends an early warning to the terminal and the public security department via GPRS, the main processor instructs the other processors to collect all information, and all cameras transmit their captured images to the main processor, which forwards them to the management center and the public security department.
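The two-tier matching rule in claim 4 (more than 18 feature vectors at 58.5% similarity marks a possible fugitive; more than 24 at 81.7% triggers an alert) can be sketched as a small decision function. The per-vector similarity scores are assumed to be precomputed elsewhere; the function name, score range (0..1), and return labels are illustrative, not from the patent.

```python
def classify_match(similarities, suspect=(18, 0.585), confirm=(24, 0.817)):
    """similarities: per-feature-vector similarity scores (0..1) between a
    captured face and one database record. Applies the claim's two tiers:
    strictly more than `confirm[0]` vectors at >= confirm[1] -> "alert",
    else strictly more than `suspect[0]` at >= suspect[1] -> "suspect",
    else "clear"."""
    n_confirm = sum(s >= confirm[1] for s in similarities)
    if n_confirm > confirm[0]:
        return "alert"    # wanted person: early warning sent via GPRS
    n_suspect = sum(s >= suspect[1] for s in similarities)
    if n_suspect > suspect[0]:
        return "suspect"  # re-aim cameras for a detailed capture
    return "clear"
```

Note the strict inequalities ("more than 18", "more than 24"), which this sketch preserves.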
5. The method of detecting vehicle-mounted personnel statistical information as claimed in claim 3, characterized in that: the main processor is connected to the vehicle's speedometer through a detection module; when it detects that the vehicle has started and is travelling at more than 1 km/h, the main processor sends a work instruction to the subordinate processors, and each in-vehicle processor captures the faces in the vehicle through its camera; after the main processor completes one round of face recognition and statistical analysis, it transmits the resulting face data to a terminal via GPRS, and the terminal obtains a clear data table with little further processing; after each vehicle start the whole system runs once per minute; after three runs the cameras rotate 15 degrees away from the long edge, and after three further runs the cameras return to their initial position and the processors automatically enter a dormant state; the running state is reactivated when the vehicle is detected to be moving again after a stop; if a single trip is short, the whole system finishes the current task and sends out the data, then enters the dormant state, when the vehicle speed is detected to be 0.
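The duty cycle in claim 5 can be sketched as a small state machine. One reading of the claim is assumed here: after three recognition passes the camera rotates 15 degrees, and after three further passes it resets and the system sleeps; the class and method names, and the minute-tick interface, are illustrative rather than from the patent.

```python
class CycleController:
    """Sketch of the claim-5 duty cycle: wake when speed exceeds 1 km/h,
    run one recognition pass per minute, rotate the camera 15 degrees after
    three passes, reset the camera and sleep after three more, and sleep
    immediately (after finishing the current task) when the vehicle stops."""

    def __init__(self):
        self.state = "sleep"
        self.runs = 0
        self.camera_angle = 0

    def on_speed(self, kmh):
        """Called by the detection module watching the speedometer."""
        if self.state == "sleep" and kmh > 1.0:
            self.state, self.runs, self.camera_angle = "active", 0, 0
        elif self.state == "active" and kmh == 0.0:
            self.state = "sleep"  # current task finishes, data is sent out

    def minute_tick(self):
        """One recognition pass per minute while active."""
        if self.state != "active":
            return None
        self.runs += 1
        if self.runs == 3:
            self.camera_angle += 15  # rotate away from the long edge
        elif self.runs == 6:
            self.camera_angle = 0    # restore camera, go dormant
            self.state = "sleep"
        return self.runs
```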
CN202010849899.XA 2020-08-21 2020-08-21 Method for detecting vehicle-mounted personnel statistical information Pending CN111950499A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010849899.XA CN111950499A (en) 2020-08-21 2020-08-21 Method for detecting vehicle-mounted personnel statistical information

Publications (1)

Publication Number Publication Date
CN111950499A true CN111950499A (en) 2020-11-17

Family

ID=73359228

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819292A (en) * 2021-01-14 2021-05-18 湖南海龙国际智能科技股份有限公司 Scenic spot tourist car intelligent scheduling system and method for detecting vacant seats
CN114283386A (en) * 2022-01-28 2022-04-05 浙江传媒学院 Analysis and adaptation intensive scene people stream real-time monitoring system based on big data
CN114694284A (en) * 2022-03-24 2022-07-01 北京金和网络股份有限公司 Special vehicle driver identity verification method and device
CN116524569A (en) * 2023-05-10 2023-08-01 深圳大器时代科技有限公司 Multi-concurrency face recognition system and method based on classification algorithm
CN118155264A (en) * 2024-03-27 2024-06-07 河北汉佳电子科技有限公司 Vehicle inspection method, device, terminal and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296307A (en) * 2016-08-24 2017-01-04 郑州天迈科技股份有限公司 Electronic stop plate advertisement delivery effect based on recognition of face analyzes method
CN107004133A (en) * 2017-02-16 2017-08-01 深圳市锐明技术股份有限公司 Patronage statistical method and device in a kind of vehicle
CN107239762A (en) * 2017-06-06 2017-10-10 电子科技大学 Patronage statistical method in a kind of bus of view-based access control model
CN110516600A (en) * 2019-08-28 2019-11-29 杭州律橙电子科技有限公司 A kind of bus passenger flow detection method based on Face datection
CN111160243A (en) * 2019-12-27 2020-05-15 深圳云天励飞技术有限公司 Passenger flow volume statistical method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination