CN109359548B - Multi-face recognition monitoring method and device, electronic equipment and storage medium - Google Patents
Multi-face recognition monitoring method and device, electronic equipment and storage medium
- Publication number
- CN109359548B (application CN201811097371A)
- Authority
- CN
- China
- Prior art keywords
- face
- information
- image
- feature information
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C1/00—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
- G07C1/10—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people together with the recording, indicating or registering of other data, e.g. of signs of identity
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Collating Specific Patterns (AREA)
- Image Analysis (AREA)
Abstract
The embodiment of the application discloses a multi-face recognition monitoring method and device, electronic equipment and a storage medium. The method comprises the following steps: acquiring a scene image; in response to the scene image comprising at least one face image, extracting face feature information of each face image in the at least one face image; verifying the face feature information of each face image against template face feature information in a face feature set to obtain a verification result; in response to the verification result indicating that verification has passed, acquiring user information corresponding to the face feature information of each face image; and generating and storing at least one pass record of the user information. The method and the device can identify multiple persons in a scene, can in particular rapidly carry out a series of operations such as attendance checking and monitoring for a multi-person passageway, and improve recognition efficiency and monitoring safety.
Description
Technical Field
The present application relates to the field of image processing, and in particular, to a multi-face recognition monitoring method, a multi-face recognition monitoring apparatus, an electronic device, and a computer-readable storage medium.
Background
Traditional attendance and monitoring must be completed in specific scenes, at specific angles and within specific ranges, and a large amount of time is consumed when the number of monitored objects or attendance personnel is large. In a common face recognition monitoring scheme with many attendance personnel, collecting face data one by one, or having the personnel perform face recognition and card punching in sequence, consumes a large amount of time, is inefficient, and yields a poor recognition rate under actually complex conditions.
Disclosure of Invention
The embodiment of the application provides a multi-face recognition monitoring method, a multi-face recognition monitoring device, an electronic device and a computer-readable storage medium, which can identify a plurality of persons in a scene, can in particular rapidly carry out a series of operations such as attendance checking and monitoring for a multi-person passageway, and improve recognition efficiency and monitoring safety.
A first aspect of an embodiment of the present application provides a method for monitoring multiple face identifications, including:
acquiring a scene image;
in response to the condition that the scene image comprises at least one face image, extracting face feature information of each face image in the at least one face image;
verifying the face feature information of each face image and template face feature information in a face feature set to obtain a verification result;
responding to the condition that the verification result is that the verification is passed, and acquiring user information corresponding to the face feature information of each face image;
and generating and storing at least one pass record of the user information.
In an optional embodiment, after acquiring the scene image, the method further includes:
and carrying out face detection on the scene image to obtain the face images and the number thereof contained in the scene image.
In an optional implementation manner, after obtaining the face images and the number thereof included in the scene image, the method further includes:
judging whether the number of the face images is larger than a preset threshold value or not;
if yes, sequentially verifying the face images in batches, wherein the number of the face images to be verified in each batch is not more than the preset threshold value;
and if not, extracting the face feature information of all the face images in the scene image.
In an optional implementation manner, before extracting the facial feature information of each facial image in the at least one facial image, the method further includes:
performing boundary detection on each face image in the at least one face image to obtain a face image meeting a preset condition;
performing living body detection on the face image meeting the preset conditions to obtain a face image which passes the living body detection;
and extracting the face characteristic information of the face image passing the living body detection.
In an optional embodiment, the method further comprises:
in response to the condition that the verification result is that the verification fails, repeatedly executing the step of verifying the face feature information of the face image and the face feature information in the face feature set, and counting the repeated execution times;
and in response to the condition that the execution times reach preset times, judging the user corresponding to the face image as a strange visitor and sending prompt information.
In an optional implementation manner, after the acquiring the user information corresponding to the facial feature information of each facial image, the method further includes:
and displaying the user information and the verification result.
In an optional embodiment, after generating and storing the pass record of the user information, the method further includes:
and sending the passing record of the user information to a server.
In an optional implementation manner, before verifying the face feature information of each face image against the template face feature information in the face feature set, the method further includes:
receiving user data information, wherein the user data information comprises template face feature information;
and generating and storing the user information according to the user data information, and storing the template face feature information in the face feature set.
A second aspect of the embodiments of the present application provides a multi-face recognition monitoring apparatus, including:
the first acquisition module is used for acquiring a scene image;
the feature extraction module is used for responding to the condition that the scene image comprises at least one face image and extracting the face feature information of each face image in the at least one face image;
the verification module is used for verifying the face feature information of each face image and the template face feature information in the face feature set to obtain a verification result;
the second acquisition module is used for responding to the condition that the verification result is that the verification is passed, and acquiring user information corresponding to the face feature information of each face image;
and the recording module is used for generating and storing the passage record of the user information.
In an optional implementation manner, the multiple face recognition monitoring apparatus further includes a quantity detection module, configured to perform face detection on the scene image to obtain the face images and the number thereof included in the scene image.
In an optional implementation manner, the quantity detection module is further configured to, after obtaining the face images and the number thereof included in the scene image, determine whether the number of the face images is greater than a preset threshold;
the verification module is used for sequentially verifying the face images in batches when the number of the face images is greater than the preset threshold, wherein the number of the face images to be verified in each batch is not greater than the preset threshold;
the feature extraction module is used for extracting the face feature information of all the face images in the scene image when the number of the face images is not more than the preset threshold value.
In an optional implementation manner, the multiple face recognition monitoring apparatus further includes a living body detection module, configured to:
before the feature extraction module extracts the face feature information of each face image in the at least one face image, performing boundary detection on each face image in the at least one face image to obtain a face image meeting a preset condition;
performing living body detection on the face image meeting the preset condition to obtain a face image which passes the living body detection;
the feature extraction module is used for extracting the face feature information of the face image passing the living body detection.
In an optional implementation, the verification module is configured to:
in response to the condition that the verification result is that the verification fails, repeatedly executing the step of verifying the face feature information of the face image and the face feature information in the face feature set, and counting the repeated execution times;
and in response to the condition that the execution times reach preset times, judging the user corresponding to the face image as a strange visitor and sending prompt information.
In an optional implementation manner, the multiple face recognition monitoring apparatus further includes a display module, configured to display the user information and the verification result after the obtaining module obtains the user information corresponding to the face feature information of each face image.
In an optional implementation manner, the multi-face recognition monitoring apparatus further includes a transmission module, configured to send the passage record of the user information to a server.
In an optional implementation manner, the multiple face recognition monitoring apparatus further includes an information generating module, where the transmission module is further configured to receive user data information, where the user data information includes template face feature information;
and the information generation module is used for generating and storing the user information according to the user data information and storing the template face feature information in the face feature set.
A third aspect of embodiments of the present application provides an electronic device, comprising a processor and a memory, the memory being configured to store one or more programs configured to be executed by the processor, the programs including instructions for performing some or all of the steps as described in any of the methods of the first aspect of embodiments of the present application.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium for storing a computer program, wherein the computer program is configured to make a computer perform part or all of the steps as described in any one of the methods of the first aspect of embodiments of the present application.
According to the embodiment of the application, a scene image is obtained; in response to the scene image comprising at least one face image, face feature information of each face image in the at least one face image is extracted; the face feature information of each face image is verified against the template face feature information in the face feature set to obtain a verification result; in response to the verification result indicating that verification has passed, user information corresponding to the face feature information of each face image is obtained; and a passage record of the at least one piece of user information is generated and stored. Identity recognition can thereby be carried out on a plurality of persons in the scene, a series of operations such as attendance checking and monitoring can be rapidly carried out in a multi-person passageway in particular, and recognition efficiency and monitoring safety are improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a schematic flowchart of a multi-face recognition monitoring method disclosed in an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of another multi-face recognition monitoring method disclosed in the embodiments of the present application;
fig. 3 is a schematic structural diagram of a multi-face recognition monitoring apparatus disclosed in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Embodiments of the present application relate to electronic devices, which may be terminal devices and may include, but are not limited to, portable devices such as multi-face recognition monitoring devices for attendance management, multi-face recognition monitoring devices for environmental monitoring, mobile phones with touch-sensitive surfaces (e.g., touch screen displays and/or touch pads), and laptop or tablet computers. It should also be understood that in some embodiments the terminal device is not a portable communication device but may be a desktop computer having a touch-sensitive surface (e.g., a touch screen display and/or a touchpad).
The following describes embodiments of the present application in detail.
Referring to fig. 1, fig. 1 is a schematic flow chart of a multi-face recognition monitoring method disclosed in an embodiment of the present application, and as shown in fig. 1, the multi-face recognition monitoring method includes the following steps:
101. Acquire a scene image, and in response to the scene image containing at least one face image, extract face feature information of each face image in the at least one face image.
The scene image is an image acquired by image acquisition equipment such as a camera, a digital camera or a portrait snapshot machine. The surrounding environment can be captured by at least one camera to obtain the scene image, which can be understood as starting a snapshot service for people coming and going; this is particularly suitable for a multi-person passageway environment, where the value of the scene image can be fully exploited. The multi-face recognition monitoring apparatus in the embodiment of the application may include a face recognition system function and may determine whether the scene image includes a face image through face recognition technology, or the multi-face recognition monitoring apparatus may perform face tracking in the scene.
In customs, airports, banks, video teleconferences and the like, it may be desirable to track specific face targets. Obviously, to track a face in an image, the face must first be recognized. Face recognition here means analyzing a static picture or a video sequence with a computer, finding the faces, and outputting effective information such as their number, positions and sizes; face tracking then, on the premise that a face has been detected, continuously captures information such as the position and size of the face in subsequent frames, i.e., the subsequent steps are carried out. Optionally, step 101 may be performed periodically; for example, the multi-face recognition monitoring apparatus may acquire a scene image every 2 seconds and perform the above determination process.
If the scene image does not contain the face image, the scene image can be deleted, or the scene image can be deleted after being stored for a period of time, so that the storage space is saved.
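A minimal sketch of the acquisition and detection step follows, assuming OpenCV's Haar cascade detector and a 2-second polling interval; both are illustrative assumptions, since the embodiment does not prescribe a particular detector or interval.

```python
# Minimal sketch of scene-image acquisition and face detection (step 101).
# The Haar cascade detector and the 2-second polling interval are assumptions.
import time
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def acquire_and_detect(camera_index=0, poll_seconds=2):
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, scene = cap.read()              # acquire a scene image
            if not ok:
                break
            gray = cv2.cvtColor(scene, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) == 0:
                # no face image: the scene image may simply be discarded
                time.sleep(poll_seconds)
                continue
            yield scene, faces                  # (x, y, w, h) box for each face
            time.sleep(poll_seconds)
    finally:
        cap.release()
```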
Optionally, the multi-face recognition monitoring apparatus includes a device capable of remote communication, for example an electronic device with a communication function such as a server, or a terminal device such as a computer, and in particular may be an attendance terminal device capable of network communication. The execution subject in the embodiment of the present application may be the above multi-face recognition monitoring apparatus. Other user terminal devices may also communicate with the multi-face recognition monitoring apparatus remotely through a network; a specific manner of the remote communication may be that a user interacts remotely with the multi-face recognition monitoring apparatus through an application (APP) installed on a terminal device (e.g., a mobile phone).
If the scene image comprises at least one face image, the multiple face recognition monitoring devices can extract face feature information of the face image, the step can be understood as positioning and extracting face organ features, texture areas and predefined feature points in the face image, and the identity of the user can be better determined through the extracted face feature information. In the embodiment of the application, at least two face images can be synchronously processed, so that the processing efficiency is improved.
Optionally, the face image may be preprocessed: the image preprocessing for the human face is a process of processing the image based on the human face detection result and finally serving for feature extraction. The acquired face image is limited by various conditions and random interference, so that the face image cannot be directly used, and the face image needs to be subjected to image preprocessing such as gray correction, noise filtering and the like in the early stage of image processing. For the face image, the preprocessing process mainly includes light compensation, gray level transformation, histogram equalization, normalization, geometric correction, filtering, sharpening, and the like of the face image.
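A short illustrative preprocessing routine for a cropped face image is sketched below, covering the gray-level transformation, noise filtering, histogram equalization and normalization mentioned above; the 112x112 output size and the specific filter are assumptions, not part of the embodiment.

```python
# Illustrative preprocessing of a cropped face image before feature extraction.
import cv2
import numpy as np

def preprocess_face(face_bgr, size=(112, 112)):
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)  # gray-level transformation
    gray = cv2.equalizeHist(gray)                      # histogram equalization
    gray = cv2.GaussianBlur(gray, (3, 3), 0)           # simple noise filtering
    gray = cv2.resize(gray, size)                      # geometric normalization
    return gray.astype(np.float32) / 255.0             # intensity normalization
```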
Features that can be used by a face recognition system are generally classified into visual features, pixel statistical features, face image transform coefficient features, face image algebraic features, and the like. The face feature extraction is performed on some features of the face. Face feature extraction, also known as face characterization, is a process of feature modeling for a face. The methods for extracting human face features are classified into two main categories: one is a knowledge-based characterization method; the other is a characterization method based on algebraic features or statistical learning.
The knowledge-based characterization method mainly obtains feature data which is helpful for face classification according to shape description of face organs and distance characteristics between the face organs, and feature components of the feature data generally comprise Euclidean distance, curvature, angle and the like between feature points. The human face is composed of parts such as eyes, nose, mouth, and chin, and geometric description of the parts and their structural relationship can be used as important features for recognizing the human face, and these features are called geometric features. The knowledge-based face characterization mainly comprises a geometric feature-based method and a template matching method.
Specifically, the feature extraction method based on the still image may use a global method or a local method. The facial expression is embodied by the movement of muscles, and the static image of the facial expression visually displays the change of facial shapes and textures generated by the movement of the muscles of the face when the expression occurs. On the whole, the change causes obvious deformation of facial organs, which can affect the global information of the facial image, so that a facial expression recognition algorithm considering expression characteristics from the whole perspective appears.
The classical algorithms in the global method include Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Linear Discriminant Analysis (LDA).
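As a toy illustration of the global approach, the sketch below projects flattened face images onto a PCA ("eigenfaces") subspace; scikit-learn is used here purely for illustration and is not implied by the embodiment.

```python
# Toy eigenfaces sketch: PCA as one possible global feature extractor.
import numpy as np
from sklearn.decomposition import PCA

def fit_pca_features(face_images, n_components=64):
    """face_images: iterable of equally-sized 2-D grayscale arrays.
    n_components must not exceed the number of training faces."""
    X = np.stack([img.ravel() for img in face_images])  # one row per face
    pca = PCA(n_components=n_components, whiten=True)
    features = pca.fit_transform(X)                      # global feature vectors
    return pca, features
```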
The facial expression on the static image not only changes integrally, but also changes locally. The information contained in local deformation such as texture and wrinkles of facial muscles is helpful for accurately judging the attribute of the expression. The classical methods of the local method are the Gabor wavelet method and the LBP operator method.
Optionally, the face image feature extraction process in the dynamic scene image may also be implemented by a dynamic image-based feature extraction method. Moving images differ from still images in that: the dynamic image reflects the process of occurrence of the facial expression. The expressive features of the dynamic image are therefore mainly manifested in the continuous deformation of the face and the muscular movement of different areas of the face. The dynamic image-based feature extraction method may include an optical flow method, a model method, and a geometric method.
Preferably, the embodiment of the application can refine the face feature model through a deep learning technique to execute the step 101 and the step 102, so as to quickly respond to an application scenario with multiple face recognition requirements.
The concept of deep learning in the embodiments of the present application stems from the study of artificial neural networks. A multi-layer perceptron with multiple hidden layers is a deep learning structure. Deep learning forms a more abstract class or feature of high-level representation properties by combining low-level features to discover a distributed feature representation of the data.
Deep learning is a method based on characterization learning of data in machine learning. An observation (e.g., an image) may be represented using a number of ways, such as a vector of intensity values for each pixel, or more abstractly as a series of edges, a specially shaped region, etc. Tasks (e.g., face recognition or facial expression recognition) are more easily learned from the examples using some specific representation methods. The benefit of deep learning is to replace the manual feature acquisition with unsupervised or semi-supervised feature learning and hierarchical feature extraction efficient algorithms. Deep learning is a new field in machine learning research, and its motivation is to create and simulate a neural network for human brain to analyze and learn, which simulates the mechanism of human brain to interpret data such as images, sounds and texts.
Like the machine learning method, the deep machine learning method also has a classification of supervised learning and unsupervised learning. The learning models built under different learning frameworks are very different. For example, a Convolutional Neural Network (CNN) is a machine learning model under Deep supervised learning, which may also be referred to as a network structure model based on Deep learning, and a Deep Belief Network (DBN) is a machine learning model under unsupervised learning.
After the above-mentioned facial feature information is extracted, step 102 may be performed.
102. And verifying the face feature information of each face image and the template face feature information in the face feature set to obtain a verification result.
Specifically, the multi-face recognition monitoring apparatus may store the face feature set, where the face feature set includes at least one piece of template face feature information. The template face feature information may be obtained in a manner similar to step 101; that is, the multi-face recognition monitoring apparatus may register the face feature information of a user and store it as template face feature information, and during registration it may also store the user's user information (which may include basic information such as the user's name and work number) together with the template face feature information. Optionally, the face feature set may be stored locally or in a server, and when verification is required it may be acquired from the server. When performing face recognition, the multi-face recognition monitoring apparatus may compare the extracted face feature information with the template face feature information one by one to determine whether face feature information matching it exists in the face feature set (this may be referred to as target face feature information), so as to determine whether the extracted face feature information passes verification, i.e., to obtain a verification result. The presence of matching face feature information may be understood as the similarity between the two sets of face feature information being greater than a preset similarity threshold. Specifically, the multi-face recognition monitoring apparatus may store the preset similarity threshold, for example 95% or 90%, and calculate a similarity between the two sets of face feature information (the extracted face feature information and the template face feature information in the face feature set). If the similarity is greater than the preset similarity threshold, the verification is passed; otherwise, the verification fails. That is, if the similarity between the target face feature information and the face feature information is greater than the preset threshold, the verification is passed, and step 103 may be executed.
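A minimal sketch of the 1:N comparison described above is given next; cosine similarity and the 0.90 threshold are assumptions, as the embodiment only requires a similarity greater than a preset threshold.

```python
# Sketch of the 1:N verification of step 102 against a face feature set.
import numpy as np

def verify(feature, feature_set, threshold=0.90):
    """feature: 1-D vector; feature_set: dict of user_id -> template vector."""
    best_id, best_sim = None, -1.0
    for user_id, template in feature_set.items():
        sim = float(np.dot(feature, template) /
                    (np.linalg.norm(feature) * np.linalg.norm(template) + 1e-9))
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    passed = best_sim > threshold          # pass only above the preset threshold
    return passed, (best_id if passed else None), best_sim
```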
103. And acquiring user information corresponding to the facial feature information of each facial image in response to the condition that the verification result is that the verification is passed.
As in step 102, if face feature information matching the extracted face feature information exists in the face feature set, this face recognition passes. When the verification result indicates that verification has passed, the multi-face recognition monitoring apparatus can acquire the user information corresponding to the face feature information of each face image. The user information can comprise basic information such as the user's name and work number, and the user information has a corresponding relationship with the template face feature information uploaded to a database or the cloud; that is, when each piece of user information is stored, the multi-face recognition monitoring apparatus can collect a face image of the user and extract face feature information to be stored as template face feature information, which is used in the subsequent face recognition and identity verification process.
Optionally, after the user completes user information registration, if no face feature information corresponds to the user information, prompt information indicating that personal registration information is missing may be output; for example, a notification that the face template has not been successfully entered may be issued to the corresponding user (the user information may include the user's contact information).
Optionally, the method further includes:
receiving user data information, wherein the user data information comprises template face feature information;
and generating and storing the user information according to the user data information, and storing the template face feature information in the face feature set.
The multi-face recognition monitoring apparatus can import and upload user data in batches; that is, the multi-face recognition monitoring apparatus can pre-store the user information to be stored (user registration). The multi-face recognition monitoring apparatus can communicate with other terminal equipment such as a mobile phone: a user can register a user account on the terminal equipment, the terminal equipment collects a face image of the user and generates face feature information to serve as template face feature information, and the user data information (including the template face feature information) is sent to the multi-face recognition monitoring apparatus. After receiving the user data information and performing identity verification, the multi-face recognition monitoring apparatus can generate and store the user information of the user according to the user data information, and at the same time store the user's template face feature information in the face feature set as a basis for the face recognition verification.
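The registration flow just described might be sketched as follows; the field names ('user_id', 'name', 'work_number', 'template_feature') and the in-memory containers are illustrative assumptions only.

```python
# Sketch of user registration: store user information and add the template
# face feature to the face feature set.
feature_set = {}          # user_id -> template face feature vector
user_directory = {}       # user_id -> user information (name, work number, ...)

def register_user(user_data):
    """user_data: dict with 'user_id', 'name', 'work_number', 'template_feature'."""
    user_id = user_data["user_id"]
    user_directory[user_id] = {
        "name": user_data.get("name", ""),
        "work_number": user_data.get("work_number", ""),
    }
    feature_set[user_id] = user_data["template_feature"]
```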
After the verification is successful, after the user information corresponding to the facial feature information of each facial image is obtained, step 104 may be executed.
104. And generating and storing at least one pass record of the user information.
The verification result may be that verification passed or failed. After verification passes, the pass record of the user information can be understood as a record indicating that the user's identity has been confirmed and face recognition verification has passed, and this record may be stored in the multi-face recognition monitoring apparatus. The pass record can be stored locally in the multi-face recognition monitoring apparatus, or the apparatus can send the pass record to a server, where it is stored. Specifically, the user information of each user may correspond to a record table; when the multi-face recognition monitoring apparatus passes the face verification of a certain user, a pass record may be created and stored, the pass record indicating that the user has been monitored and recognized. Depending on the application scenario, the pass record may be an attendance record or a monitoring record, and it may include the user face image collected this time and the pass time at which the image was collected.
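One possible representation of the pass record of step 104 is sketched below; the field names and the JSON-lines storage are assumptions, and the record could equally be written to a local database or forwarded to a server.

```python
# Illustrative pass record structure and local storage.
from dataclasses import dataclass, field, asdict
from datetime import datetime
import json

@dataclass
class PassRecord:
    user_id: str
    user_name: str
    capture_time: str = field(default_factory=lambda: datetime.now().isoformat())
    face_image_path: str = ""        # the face image collected this time
    record_type: str = "attendance"  # or "monitoring", depending on the scenario

def store_record(record, path="pass_records.jsonl"):
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record), ensure_ascii=False) + "\n")
```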
Optionally, after step 103, the method further includes: and displaying the user information and the verification result.
Specifically, the multi-face recognition monitoring apparatus may further output the user information and the verification result, for example by displaying them in a display interface, so that the user can quickly obtain feedback and intuitively know whether face recognition verification has passed. In an attendance scenario, for example, the user information that passed verification and the user information that failed verification can be displayed quickly, rapidly realizing an attendance verification function.
For example, in an attendance application, user A appears in the attendance monitoring area of the multi-face recognition monitoring apparatus at 8:20. The multi-face recognition monitoring apparatus collects the scene image containing the face of user A at this moment and, after performing the processing of steps 102 to 103, finds the target face feature information matching the extracted face feature information in its face feature set; that is, the face recognition of user A passes verification. The user ID (user information) corresponding to the face image of user A can then be obtained, and the attendance result is recorded under that user ID, for example "checked in successfully at 8:20", i.e., the attendance is recorded and stored for this time. Meanwhile, the user information of user A (which may include user A's face image, name, work number, etc.) and the verification result "verification passed" are displayed in the display interface, indicating that A's attendance check was successful.
According to the embodiment of the application, a scene image is acquired; in response to the scene image comprising at least one face image, face feature information of each face image in the at least one face image is extracted; the face feature information of each face image is verified against the template face feature information in the face feature set to obtain a verification result; in response to the verification result indicating that verification has passed, user information corresponding to the face feature information of each face image is acquired; and a pass record of the user information is generated and stored. The effect of monitoring or attendance checking can thereby be achieved, multiple persons in a scene can be identified, a series of operations such as attendance checking and monitoring can be rapidly carried out for a multi-person passageway in particular, and recognition efficiency and monitoring safety are improved.
Referring to fig. 2, fig. 2 is a schematic flow chart of another multi-face recognition monitoring method disclosed in the embodiment of the present application, and an execution subject in the embodiment of the present application may be the multi-face recognition monitoring apparatus. As shown in fig. 2, the multi-face recognition monitoring method includes the following steps:
201. and acquiring a scene image, and performing face detection on the scene image to obtain face images and the number of the face images contained in the scene image.
Specifically, the image processing in the embodiment of the present application may be implemented with OpenCV. OpenCV is a cross-platform computer vision library released under the BSD license (open source) and can run on the Linux, Windows, Android and Mac OS operating systems. It is lightweight and efficient, consists of a series of C functions and a small number of C++ classes, provides interfaces for languages such as Python, Ruby and MATLAB, and implements many general algorithms for image processing and computer vision.
The multi-face recognition monitoring apparatus may start a video stream to obtain the scene image (frame buffer), calculate a motion gradient orientation image (cv_orientation) according to the detection device orientation (device orientation) and the photographing position, and use these data as tracking parameters to obtain the number of tracked faces.
If the scene image includes a face image, the face detection may be performed by the above method to obtain the face images and the number thereof included in the scene image, and then step 202 is performed.
The steps of obtaining the scene image and obtaining the face image may refer to specific descriptions in step 101 in the embodiment shown in fig. 1, and are not described herein again.
202. And judging whether the number of the face images is greater than a preset threshold value.
The multi-face recognition monitoring apparatus may store the preset threshold, which limits the number of faces tracked and recognized at once; for example, based on the upper limit of the device's performance, the preset threshold may preferably be 5, i.e., at most 5 faces are supported. After the multi-face recognition monitoring apparatus performs face detection to obtain the face images and their number, it can judge whether the number of face images is greater than the preset threshold; if so, step 203 may be executed, and if not, step 204 may be executed.
203. And sequentially verifying the face images in batches, wherein the number of the face images to be verified in each batch is not more than the preset threshold value.
The face feature information of at most the preset threshold number of face images in the scene image can be extracted and verified first, and the remaining face images are extracted and verified after that verification is finished; in the verification of each batch, at most the preset threshold number of face images are included. The verification process of each group of face images is similar to steps 205 to 207; reference may be made to the detailed description of steps 101 to 104 in the embodiment shown in fig. 1, which is not repeated here.
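A sketch of the batching logic of steps 202-203 follows; the threshold of 5 follows the example above, and `extract` and `verify` are placeholder callables standing in for the feature extraction and verification of steps 204-205.

```python
# Sketch of batched verification: at most `preset_threshold` faces per batch.
def batched(face_images, preset_threshold=5):
    for i in range(0, len(face_images), preset_threshold):
        yield face_images[i:i + preset_threshold]

def verify_scene(face_images, extract, verify, preset_threshold=5):
    results = []
    for batch in batched(face_images, preset_threshold):
        for face in batch:                   # each batch holds at most 5 faces
            feature = extract(face)          # feature extraction (step 204)
            results.append(verify(feature))  # verification (step 205)
    return results
```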
Optionally, before extracting the facial feature information of the face image, step 203 may further include:
carrying out boundary detection on each face image in the at least one face image to obtain a face image meeting a preset condition;
performing living body detection on the face image meeting the preset conditions to obtain a face image which passes the living body detection;
and extracting the face characteristic information of the face image passing the living body detection.
A boundary refers to certain specific cases slightly above and slightly below a boundary value, relative to the input and output equivalence classes. Boundary-based methods are implemented according to the defined domain and have evolved into four techniques: boundary value analysis, robustness testing, worst-case testing and robust worst-case testing. Boundary value analysis is also a black-box testing method and a supplement to equivalence-class analysis; long-term testing experience shows that a large number of errors occur on input or output boundaries, so designing test cases around various boundary conditions helps uncover more errors.
The boundary value judgment in the embodiment of the application is used to filter faces accordingly, that is, to retain face images of high definition and quality.
The living body detection in the embodiment of the application is to prevent the system from being attacked or deceived by using specific means such as photos, videos and human face models. In a biometric system, in order to prevent a malicious person from forging and stealing the biometric characteristics of another person for identity authentication, the biometric system needs to have a liveness detection function, i.e., to determine whether the submitted biometric characteristics are from a living individual. The living body detection technology generally utilizes physiological characteristics of people, for example, living body fingerprint detection can be based on information such as temperature, perspiration and electric conductivity of fingers, living body face detection can be based on information such as head movement, respiration and red-eye effect, and living body iris detection can be based on iris flutter characteristics, motion information of eyelashes and eyelids, contraction and expansion response characteristics of pupils to visible light source intensity, and the like.
Current liveness detection in face recognition technology can adopt instruction-action matching, such as asking the face to turn left, turn right, open the mouth or blink; a failure to match the instructions is treated as forgery or deception. Effective recognition features can also be extracted from the face image using a deep neural network and computer technology to judge identity. Liveness detection can improve monitoring safety.
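The pre-filtering before feature extraction might look like the sketch below, assuming a Laplacian-variance sharpness check as a stand-in for the boundary/definition check and a placeholder `is_live` hook for the liveness check; neither specific technique is mandated by the embodiment.

```python
# Illustrative pre-filtering: keep only sharp, live face images.
import cv2

def is_sharp_enough(face_bgr, min_variance=100.0):
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= min_variance

def is_live(face_bgr):
    # Placeholder: plug in an instruction-matching or anti-spoofing model here.
    return True

def filter_faces(face_images):
    return [f for f in face_images if is_sharp_enough(f) and is_live(f)]
```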
In consideration of the upper limit of the device's performance, the multi-face recognition monitoring apparatus can process at most the preset threshold number of face images at one time, i.e., a limit is set on the number of face images processed each time. After all face images are verified in batches in sequence, step 207 may be performed.
204. And extracting the face characteristic information of all face images in the scene image.
If the number of the face images contained in the scene image is not greater than the preset threshold value, the multiple face recognition monitoring devices can extract the face feature information of all the face images in the scene image.
205. And verifying the face feature information of each face image and the template face feature information in the face feature set to obtain a verification result.
Step 206 may be performed in response to the verification result being verification pass, and step 208 may be performed in response to the verification result being verification fail.
206. And acquiring user information corresponding to the facial feature information of each facial image in response to the condition that the verification result is that the verification is passed.
After the user information is acquired, step 207 may be executed.
207. And generating and storing the pass record of the user information.
The above steps 204 to 207 may refer to the detailed description in steps 101 to 104 in the embodiment shown in fig. 1, and are not repeated here.
The pass record includes the acquisition time of the face image. If target face feature information matching the extracted face feature information exists, step 205 may be executed. The pass record can be output synchronously; it can be understood that, when verification passes, prompt information indicating that verification has passed can be output. The prompt information can be voice information, image information or text information, where the image information may include the face image recognized this time. Because the multi-face recognition monitoring apparatus can be used for attendance, the above record can be an attendance record, i.e., it includes the user information and the user's attendance time (the above acquisition time).
The user information may include a user ID, and the multiple face recognition monitoring device may register the user ID acquired when the user logs in the push application as an identifier in the background service. The target identity information may be the user ID. The multi-face recognition monitoring device may be an attendance device, and stores attendance data of at least one user, where the attendance data refers to related data recorded during attendance of the user, and may include attendance time for attendance on/off duty, basic information of the user, and the like, and the attendance data may correspond to identity information of the user, for example, an attendance record of the user corresponding to the ID may be found by using the user ID. The multi-face recognition monitoring device can establish and store the attendance records, the attendance records can be in one-to-one correspondence with the user ID and are convenient to search, and optionally, the attendance records can also be sent to other terminal equipment used by the user so that the user can check the personal attendance condition.
In the attendance application, the multiple face recognition monitoring devices can synchronize the result to the cloud background so as to persistently store the attendance result.
Optionally, the multiple face recognition monitoring devices may send the passage record of the user information to the server.
The multi-face recognition monitoring apparatus may send the pass record to the server; further optionally, it may send the pass record of the user information to the server at a predetermined time. The multi-face recognition monitoring apparatus can store the preset time and send the pass record to the server at that time (for example, at 23:00 on each working day), and the server can store the pass record. The user can log in to their user account through a user terminal device and obtain their pass records from the server; when the server receives a request instruction for obtaining pass records from the user terminal device, it can send the pass records corresponding to the user account to the user terminal device as a push message, so that the user can check their attendance status in the attendance application.
208. And in response to the condition that the verification result is that the verification fails, repeatedly executing the step of verifying the face feature information of the face image and the face feature information in the face feature set, and counting the repeated execution times.
If the verification result is that verification fails, the multi-face recognition monitoring apparatus may repeatedly execute step 205 and count the number of repetitions, so as to make the verification result more accurate and avoid recognition errors. If, before the count reaches the preset number, a verification passes, step 206 may be executed; if the count reaches the preset number, step 209 may be executed.
209. And in response to the condition that the execution times reach the preset times, judging the user corresponding to the face image as a strange visitor and sending prompt information.
When the number of executions reaches the preset number, verification has failed the preset number of times and the current face image cannot be recognized, so the user corresponding to the face image can be judged to be a strange visitor, a record can be generated and stored in the multi-face recognition monitoring apparatus, and prompt information can be sent. The prompt information is used to indicate that verification has failed and the recognized object is a strange visitor; it may comprise text information displayed on the display screen of the device (the face image of the strange visitor can also be displayed) or voice information played through a loudspeaker, so that attention can be drawn to the fact that a strange visitor has been recognized.
For example, the preset number may be 3; that is, the multi-face recognition monitoring apparatus may judge one piece of face feature information 3 times, each time calculating the similarity between the face feature information and the template face feature information in the face feature set and determining the verification result by judging the similarity between the two sets of face feature information. As in the embodiment shown in fig. 1, the multi-face recognition monitoring apparatus may store the preset similarity threshold; when the similarity is greater than the preset similarity threshold, verification passes, and otherwise verification fails. The preset similarity threshold may be set and modified by the user. For any single verification that fails, prompt information indicating the verification failure is sent; if all 3 verifications fail, the step of determining the user corresponding to the face feature information to be a stranger can be executed. In this way the verification result of the face feature information can be determined more accurately and the probability of an erroneous judgment reduced.
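The retry logic of steps 208-209 might be sketched as follows; the count of 3 follows the example above, `verify` is a placeholder for the comparison of step 205, and the printed warning stands in for the screen or voice prompt.

```python
# Sketch of repeated verification followed by the "strange visitor" judgment.
def verify_with_retries(feature, feature_set, verify, preset_times=3):
    for attempt in range(preset_times):
        passed, user_id, similarity = verify(feature, feature_set)
        if passed:
            return {"status": "passed", "user_id": user_id}
    # all attempts failed: treat as a strange visitor and send prompt information
    print("Warning: unrecognized visitor detected")
    return {"status": "stranger", "user_id": None}
```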
Optionally, if it is recognized that the extracted face feature information matches the monitored face feature information, the method further includes:
and marking the pass record of the user information.
The multiple face recognition monitoring devices can store a monitoring face feature information set, which is similar to the face feature set, but different from the face feature set, and is directed to a specific object needing important monitoring. When the acquired face feature information is judged to be matched with the face feature information in the monitoring face feature information set, the key object is determined to be monitored, and the key object can be marked when the pass record is stored or is separately stored in a target record table form. Optionally, corresponding reminding information may be output when the object is monitored or a specific object is monitored within the target monitoring time.
Optionally, the multiple face recognition monitoring devices may further store target monitoring time, that is, the multiple face recognition monitoring devices may set time-division monitoring, for example, may set the multiple face recognition monitoring devices to be turned on at the target monitoring time (for example, 20-23 points per day to friday on monday), or set the target monitoring time as a key monitoring time, and the user captured in the time is a key object of interest, so as to execute a more flexible monitoring scheme. In monitoring, if a face is monitored within the target monitoring time, the monitoring record of the user information corresponding to the face may be marked to indicate the importance level when storing the monitoring record, or the monitoring record of the user information may be stored in a special target recording table, which may be stored separately from a general monitoring recording table, so as to facilitate searching for a monitored object appearing within the important monitoring time.
Through the scheme, the multiple face recognition monitoring devices can realize key monitoring in target monitoring time and monitoring key attention objects, so that the monitoring and attendance flexibility is improved, the searching and analyzing of key monitoring records are facilitated, and the environmental safety is improved.
The embodiment of the application can be implemented with asynchronous threads, using a cache to quickly complete the 1:N comparison verification (i.e., comparing the extracted face feature information with the face feature information in the face feature set). The information output step in the embodiment of the present application may involve displaying a user interface (UI); the UI may be refreshed by the main thread alone without being coupled to the recognition sub-thread. That is, in an actual application scenario, at least two multi-face recognition monitoring apparatuses may process their respective data, and the processing results may be merged on the server side.
An asynchronous thread is a sub-thread created and executed outside the main thread of a program; the execution of the threads is independent, and the main thread does not need to wait for the sub-thread to finish, much like a running race in which the runner in each lane runs at the same time. Because asynchronous operations carry no extra thread burden and are processed via callbacks, with a good design the processing functions need not use shared variables (or at least the number of shared variables can be reduced, even if they cannot be eliminated entirely), which lowers the possibility of deadlock.
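A minimal sketch of offloading the 1:N comparison to worker threads so the main (UI) thread is not blocked is shown below; `concurrent.futures` is one possible mechanism and is not necessarily the one used in the embodiment, and `verify` and `on_done` are placeholder callables.

```python
# Sketch: run verification asynchronously and deliver the result via a callback.
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=4)

def submit_verification(feature, feature_set, verify, on_done):
    """Run `verify` in a worker thread and invoke `on_done(result)` when it finishes."""
    future = executor.submit(verify, feature, feature_set)
    future.add_done_callback(lambda f: on_done(f.result()))
```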
By the above method, the data processing speed of multi-face recognition monitoring can be greatly improved. Target groups can be filtered quickly and the flow of transient personnel can be monitored quantitatively, providing a one-stop personnel security solution for enterprises or public organizations. For example, when applied in specific public places such as schools, stations and stadiums, the scheme can effectively screen students, station passengers and concert audiences: a school can accurately find out which students left class early and which students actively arrived early, and the scheme can provide accurate attendance or monitoring services for specific places such as stations and concerts.
The embodiment of the application is particularly suitable for attendance application scenarios. Both from within a company and from client usage, the scheme can effectively improve staff attendance rates and greatly reduce problems such as slow card punching, missed card punching and erroneous card punching. It fundamentally saves a large amount of time, reduces enterprise operating costs, improves staff work efficiency and provides a safer guarantee for customers.
The embodiment of the present application acquires a scene image and performs face detection on it to obtain the face images contained in the scene image and their number, then judges whether the number of face images is greater than a preset threshold. If so, the face images are verified sequentially in batches, where the number of face images verified in each batch is not greater than the preset threshold; if not, the face feature information of all face images in the scene image is extracted. The face feature information of each face image is verified against the template face feature information in the face feature set to obtain a verification result. If the verification passes, the user information corresponding to the face feature information of each face image can be obtained, and a passage record of the user information is generated and stored. If the verification fails, the step of verifying the face feature information of the face image against the face feature information in the face feature set is executed repeatedly and the number of repetitions is counted; when the number of repetitions reaches a preset number, the user corresponding to the face image is judged to be a strange visitor and prompt information is sent. In this way, a monitoring or attendance effect is achieved, the identities of multiple persons in the scene can be recognized, a series of operations such as attendance checking and monitoring can be carried out rapidly for a multi-person flow channel, and recognition efficiency, accuracy and monitoring safety are improved.
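The sketch below condenses this flow into a single, non-normative routine; the batch threshold of 8, the retry limit of 3, and the injected verify callable are assumptions standing in for whatever detector and matcher a concrete deployment uses.

```python
from typing import Callable, Dict, List, Optional

BATCH_THRESHOLD = 8   # assumed preset threshold
MAX_RETRIES = 3       # assumed preset number of repetitions

def process_scene(face_images: List[object],
                  verify: Callable[[object], Optional[dict]],
                  retry_counts: Dict[int, int]) -> List[dict]:
    """Batch the faces when there are too many, verify each, and flag strangers
    whose verification keeps failing."""
    if len(face_images) > BATCH_THRESHOLD:
        batches = [face_images[i:i + BATCH_THRESHOLD]
                   for i in range(0, len(face_images), BATCH_THRESHOLD)]
    else:
        batches = [face_images]

    pass_records = []
    for batch in batches:
        for face in batch:
            user_info = verify(face)                  # None means verification failed
            if user_info is not None:
                pass_records.append({"user": user_info, "event": "pass"})
                continue
            key = id(face)
            retry_counts[key] = retry_counts.get(key, 0) + 1
            if retry_counts[key] >= MAX_RETRIES:
                print("prompt: strange visitor detected")
    return pass_records
```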
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It can be understood that the multi-face recognition monitoring device includes corresponding hardware structures and/or software modules for performing the above functions. Those of skill in the art will appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the multi-face recognition monitoring device may be divided into functional units according to the above method examples; for example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit may be realized in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiment of the present application is schematic and is only a logical function division; other division manners are possible in actual implementation.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a multi-face recognition monitoring device according to an embodiment of the present application. As shown in fig. 3, the multi-face recognition monitoring device 300 includes the following modules (a brief software sketch of this division is given after the list):
a first obtaining module 310, configured to obtain a scene image;
the feature extraction module 320 is configured to, in response to a situation that at least one face image is included in the scene image, extract face feature information of each face image in the at least one face image;
the verification module 330 is configured to verify the face feature information of each face image with template face feature information in a face feature set to obtain a verification result;
a second obtaining module 340, configured to obtain, in response to the verification result being that the verification passes, user information corresponding to the face feature information of each face image;
and a recording module 350, configured to generate and store a pass record of the user information.
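In the following non-normative sketch, each module is modeled as an injected callable and the device object simply wires them together; the callables themselves are placeholders for the concrete implementations described elsewhere in this document.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class MultiFaceRecognitionMonitoringDevice:
    obtain_scene: Callable[[], object]                   # first obtaining module 310
    extract_features: Callable[[object], List[object]]   # feature extraction module 320
    verify: Callable[[object], Optional[object]]         # verification module 330 (1:N)
    fetch_user_info: Callable[[object], dict]            # second obtaining module 340
    store_record: Callable[[dict], None]                 # recording module 350

    def run_once(self) -> None:
        scene = self.obtain_scene()
        for feature in self.extract_features(scene):
            matched = self.verify(feature)
            if matched is not None:                      # verification passed
                self.store_record(self.fetch_user_info(matched))
```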
Optionally, the multiple face recognition monitoring apparatus 300 further includes a quantity detection module 360, configured to perform face detection on the scene image, so as to obtain the face images and the number thereof included in the scene image.
Optionally, the quantity detection module 360 is further configured to, after obtaining the face images and the number thereof included in the scene image, determine whether the number of the face images is greater than a preset threshold;
the verification module 330 is configured to sequentially verify the face images in batches when the number of the face images is greater than the preset threshold, where the number of the face images to be verified in each batch is not greater than the preset threshold;
the feature extraction module 320 is configured to extract face feature information of all face images in the scene image when the number of the face images is not greater than the preset threshold.
Optionally, the multiple face recognition monitoring device 300 further includes a living body detection module 370, configured to:
before the feature extraction module 320 extracts the face feature information of each face image in the at least one face image, performing boundary detection on each face image in the at least one face image to obtain a face image meeting a predetermined condition;
performing living body detection on the face image meeting the preset conditions to obtain a face image which passes the living body detection;
the feature extraction module 320 is configured to extract the face feature information of the face image that passes the living body detection; a sketch of this gating is given below.
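In this sketch, boundary detection keeps only faces that lie fully inside the frame, living body detection then filters out non-live faces, and only the survivors reach feature extraction; the (x, y, w, h) box format, the margin, and the injected liveness callable are assumptions rather than details fixed by the embodiment.

```python
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]   # x, y, width, height of a detected face

def passes_boundary(box: Box, frame_w: int, frame_h: int, margin: int = 2) -> bool:
    """A face 'meets the predetermined condition' here if it lies fully inside the frame."""
    x, y, w, h = box
    return (x >= margin and y >= margin and
            x + w <= frame_w - margin and y + h <= frame_h - margin)

def gate_faces(boxes: List[Box], frame_w: int, frame_h: int,
               is_live: Callable[[Box], bool]) -> List[Box]:
    """Boundary detection first, then living body detection; only the survivors
    are passed on to feature extraction."""
    candidates = [b for b in boxes if passes_boundary(b, frame_w, frame_h)]
    return [b for b in candidates if is_live(b)]
```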
Optionally, the verification module 330 is configured to:
in response to the condition that the verification result is that the verification fails, repeatedly executing the step of verifying the face feature information of the face image and the face feature information in the face feature set, and counting the repeated execution times;
and in response to the condition that the execution times reach preset times, judging the user corresponding to the face image as a strange visitor and sending prompt information.
Optionally, the multi-face recognition monitoring apparatus 300 further includes a display module 380, configured to display the user information and the verification result after the second obtaining module 340 obtains the user information corresponding to the face feature information of each face image.
Optionally, the multiple face recognition monitoring apparatus 300 further includes a transmission module 390, configured to send the passage record of the user information to a server.
Optionally, the multi-face recognition monitoring apparatus 300 further includes an information generating module 3100, wherein the transmission module 390 is further configured to receive user data information, where the user data information includes template face feature information;
the information generating module 3100 is configured to generate and store the user information according to the user data information, and to store the template face feature information in the face feature set; a possible enrollment sketch is given below.
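In the following sketch, field names such as user_id and template_feature are assumptions about the structure of the user data information, and plain dictionaries stand in for whatever persistence the device actually uses.

```python
def enroll(user_data: dict, user_info_store: dict, face_feature_set: dict) -> None:
    """Generate and store user information from the received user data information,
    and add the template face feature to the face feature set."""
    user_id = user_data["user_id"]
    user_info_store[user_id] = {
        "name": user_data.get("name"),
        "department": user_data.get("department"),
    }
    face_feature_set[user_id] = user_data["template_feature"]
```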
According to the specific implementation manner of the embodiment of the present application, the steps 101 to 104, 201 to 209 related to the multi-face recognition monitoring method shown in fig. 1 and fig. 2 may be executed by each module in the multi-face recognition monitoring apparatus 300 shown in fig. 3. For example, steps 101 to 104 in fig. 1 may be respectively performed by the first obtaining module 310, the feature extracting module 320, the verifying module 330 and the second obtaining module 340 shown in fig. 3, which are not described in detail again.
By implementing the multi-face recognition monitoring apparatus 300 shown in fig. 3, the apparatus 300 may acquire a scene image; in response to the scene image containing at least one face image, extract the face feature information of each face image in the at least one face image; verify the face feature information of each face image against the template face feature information in the face feature set to obtain a verification result; in response to the verification result being that the verification passes, acquire user information corresponding to the face feature information of each face image; and generate and store at least one passage record of the user information. In this way, the identities of multiple persons in a scene can be recognized, a series of operations such as attendance checking and monitoring can be carried out rapidly for a multi-person flow channel, and recognition efficiency and monitoring safety are improved.
Referring to fig. 4, fig. 4 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application. As shown in fig. 4, the electronic device 400 includes a processor 401 and a memory 402, wherein the electronic device 400 may further include a bus 403, the processor 401 and the memory 402 may be connected to each other through the bus 403, and the bus 403 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 403 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 4, but this does not indicate only one bus or one type of bus. Electronic device 400 may also include input-output device 404, where input-output device 404 may include a display screen, such as a liquid crystal display screen. Memory 402 is used to store one or more programs containing instructions; processor 401 is configured to invoke instructions stored in memory 402 to perform some or all of the method steps described above in the embodiments of fig. 1 or 2.
By implementing the electronic device 400 shown in fig. 4, a scene image may be obtained; in response to the scene image containing at least one face image, the face feature information of each face image in the at least one face image is extracted; the face feature information of each face image is verified against the template face feature information in the face feature set to obtain a verification result; in response to the verification result being that the verification passes, user information corresponding to the face feature information of each face image is obtained; and at least one passage record of the user information is generated and stored. Identity recognition may thus be performed on multiple persons in the scene, a series of operations such as attendance checking and monitoring may be carried out rapidly for a multi-person flow channel, and recognition efficiency and monitoring security are improved.
An embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program enables a computer to execute some or all of the steps of any one of the multiple face recognition monitoring methods described in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or modules through some interfaces, and may be in an electrical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a form of hardware or a form of a software functional unit.
The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash memory disks, read-only memory, random access memory, magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (14)
1. A multi-face recognition monitoring method is characterized by comprising the following steps:
acquiring a scene image;
carrying out face detection on the scene image to obtain face images and the number of the face images contained in the scene image;
judging whether the number of the face images is larger than a preset threshold value or not;
if yes, sequentially verifying the face images in batches, wherein the number of the face images to be verified in each batch is not more than the preset threshold value;
if not, extracting the face feature information of all face images in the scene image;
verifying the face feature information of each face image and template face feature information in the face feature set to obtain a verification result;
responding to the condition that the verification result is that the verification is passed, and acquiring user information corresponding to the face feature information of each face image;
and generating and storing at least one pass record of the user information.
2. The multi-face recognition monitoring method according to claim 1, further comprising, before extracting the face feature information of each face image of the at least one face image:
performing boundary detection on each face image in the at least one face image to obtain a face image meeting a preset condition;
performing living body detection on the face image meeting the preset conditions to obtain a face image which passes the living body detection;
and extracting the face characteristic information of the face image passing the living body detection.
3. The multi-face recognition monitoring method according to claim 1, further comprising:
in response to the condition that the verification result is that the verification fails, repeatedly executing the step of verifying the face feature information of the face image and the face feature information in the face feature set, and counting the repeated execution times;
and in response to the condition that the execution times reach preset times, judging the user corresponding to the face image as a strange visitor and sending prompt information.
4. The multi-face recognition monitoring method according to claim 1, further comprising, after the obtaining of the user information corresponding to the face feature information of each face image:
and displaying the user information and the verification result.
5. The multi-face recognition monitoring method according to claim 3 or 4, wherein after generating and storing the passage record of the user information, the method further comprises:
and sending the passing record of the user information to a server.
6. The multi-face recognition monitoring method according to claim 3, wherein before verifying the face feature information of each face image with the template face feature information in the face feature set, the method further comprises:
receiving user data information, wherein the user data information comprises template face feature information;
and generating and storing the user information according to the user data information, and storing the template face feature information in the face feature set.
7. A multiple face recognition monitoring device, comprising:
the first acquisition module is used for acquiring a scene image;
the quantity detection module is used for carrying out face detection on the scene image to obtain the face images and the number of the face images contained in the scene image;
the quantity detection module is also used for judging whether the number of the face images is greater than a preset threshold value;
the verification module is used for sequentially verifying the face images in batches when the number of the face images is greater than the preset threshold, wherein the number of the face images to be verified in each batch is not greater than the preset threshold;
the feature extraction module is used for extracting face feature information of all face images in the scene image when the number of the face images is not more than the preset threshold value;
the verification module is further used for verifying the face feature information of each face image and the template face feature information in the face feature set to obtain a verification result;
the second acquisition module is used for responding to the condition that the verification result is that the verification is passed, and acquiring user information corresponding to the face feature information of each face image;
and the recording module is used for generating and storing the passage record of the user information.
8. The multi-face recognition monitoring device of claim 7, further comprising a liveness detection module configured to:
before the feature extraction module extracts the face feature information of each face image in at least one face image, carrying out boundary detection on each face image in at least one face image to obtain a face image meeting a preset condition;
performing living body detection on the face image meeting the preset conditions to obtain a face image which passes the living body detection;
the feature extraction module is used for extracting the face feature information of the face image passing the living body detection.
9. The multi-face recognition monitoring device of claim 7, wherein the verification module is configured to:
in response to the condition that the verification result is that the verification fails, repeatedly executing the step of verifying the face feature information of the face image and the face feature information in the face feature set, and counting the repeated execution times;
and in response to the condition that the execution times reach preset times, judging the user corresponding to the face image as a strange visitor and sending prompt information.
10. The multi-face recognition monitoring device according to claim 7, further comprising a display module for displaying the user information and the verification result after the obtaining module obtains the user information corresponding to the face feature information of each face image.
11. The multi-face recognition monitoring device according to claim 9 or 10, further comprising a transmission module for transmitting the passage record of the user information to a server.
12. The multi-face recognition monitoring device of claim 11, further comprising an information generation module, wherein the transmission module is further configured to receive user data information, and the user data information includes template face feature information;
and the information generation module is used for generating and storing the user information according to the user data information and storing the template face feature information in the face feature set.
13. An electronic device comprising a processor and a memory for storing one or more programs configured for execution by the processor, the programs comprising instructions for performing the method of any of claims 1-6.
14. A computer-readable storage medium for storing a computer program, wherein the computer program causes a computer to perform the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811097371.0A CN109359548B (en) | 2018-09-19 | 2018-09-19 | Multi-face recognition monitoring method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109359548A CN109359548A (en) | 2019-02-19 |
CN109359548B true CN109359548B (en) | 2022-07-08 |
Family
ID=65351404
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811097371.0A Active CN109359548B (en) | 2018-09-19 | 2018-09-19 | Multi-face recognition monitoring method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109359548B (en) |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109934949A (en) * | 2019-03-12 | 2019-06-25 | 上海商汤智能科技有限公司 | Work attendance method and device, equipment, storage medium |
US12118067B2 (en) | 2019-04-10 | 2024-10-15 | Rakuten Group, Inc. | Authentication system, authentication terminal, user terminal, authentication method, and program |
CN110163092A (en) * | 2019-04-12 | 2019-08-23 | 深圳壹账通智能科技有限公司 | Demographic method, device, equipment and storage medium based on recognition of face |
CN110232323A (en) * | 2019-05-13 | 2019-09-13 | 特斯联(北京)科技有限公司 | A kind of parallel method for quickly identifying of plurality of human faces for crowd and its device |
CN112131915B (en) * | 2019-06-25 | 2023-03-24 | 杭州海康威视数字技术股份有限公司 | Face attendance system, camera and code stream equipment |
CN110427265A (en) * | 2019-07-03 | 2019-11-08 | 平安科技(深圳)有限公司 | Method, apparatus, computer equipment and the storage medium of recognition of face |
CN110363891A (en) * | 2019-07-04 | 2019-10-22 | 华南理工大学 | A kind of intelligent visitor system suitable for more scenes |
CN110458069A (en) * | 2019-08-02 | 2019-11-15 | 深圳市华方信息产业有限公司 | A kind of method and system based on face recognition Added Management user's on-line study state |
CN110490106B (en) * | 2019-08-06 | 2022-05-03 | 万翼科技有限公司 | Information management method and related equipment |
CN110503059B (en) * | 2019-08-27 | 2020-12-01 | 国网电子商务有限公司 | Face recognition method and system |
CN110598602A (en) * | 2019-08-29 | 2019-12-20 | 恒大智慧科技有限公司 | Scenic spot person searching management method and system and storage medium |
CN110910549A (en) * | 2019-11-15 | 2020-03-24 | 江苏高泰软件技术有限公司 | Campus personnel safety management system based on deep learning and face recognition features |
CN111091628A (en) * | 2019-11-20 | 2020-05-01 | 深圳市桑格尔科技股份有限公司 | Face recognition attendance checking equipment with monitoring function |
CN110991316B (en) * | 2019-11-28 | 2023-10-13 | 杭州云栖智慧视通科技有限公司 | Method for automatically acquiring shape and identity information applied to open environment |
CN111160202B (en) * | 2019-12-20 | 2023-09-05 | 万翼科技有限公司 | Identity verification method, device, equipment and storage medium based on AR equipment |
CN111768535A (en) * | 2020-07-08 | 2020-10-13 | 深圳纽酷物联网有限公司 | Dynamic face recognition terminal adopting 5G network communication |
CN112819984B (en) * | 2021-01-13 | 2022-01-18 | 华南理工大学 | Classroom multi-person roll-call sign-in method based on face recognition |
CN112883795B (en) * | 2021-01-19 | 2023-01-31 | 贵州电网有限责任公司 | Rapid and automatic table extraction method based on deep neural network |
CN113096263A (en) * | 2021-03-16 | 2021-07-09 | 普联技术有限公司 | Display method, device and equipment for face card punching and storage medium |
CN113190700B (en) * | 2021-07-02 | 2021-10-08 | 成都旺小宝科技有限公司 | Face snapshot, screening and storage method and system for real estate transaction |
CN113553990B (en) * | 2021-08-09 | 2022-04-15 | 深圳智必选科技有限公司 | Method and device for tracking and identifying multiple faces, computer equipment and storage medium |
CN113674470B (en) * | 2021-08-13 | 2023-05-05 | 大匠智联(深圳)科技有限公司 | Face recognition method of access control system and access control system |
CN113656843B (en) * | 2021-08-18 | 2022-08-12 | 北京百度网讯科技有限公司 | Information verification method, device, equipment and medium |
CN113807303A (en) * | 2021-09-26 | 2021-12-17 | 北京市商汤科技开发有限公司 | Face recognition method and apparatus, medium, and electronic device |
CN114915439A (en) * | 2021-10-27 | 2022-08-16 | 杭州拼便宜网络科技有限公司 | E-commerce platform identity verification method and device, electronic equipment and storage medium |
CN114357419A (en) * | 2022-01-05 | 2022-04-15 | 厦门熵基科技有限公司 | Face verification method and device, storage medium and computer equipment |
CN114495290B (en) * | 2022-02-21 | 2024-06-21 | 平安科技(深圳)有限公司 | Living body detection method, living body detection device, living body detection equipment and storage medium |
CN114550253B (en) * | 2022-02-22 | 2024-05-10 | 支付宝(杭州)信息技术有限公司 | Method and device for preprocessing face image in queuing scene |
CN118042074A (en) * | 2024-01-05 | 2024-05-14 | 广州开得联软件技术有限公司 | Target recognition method, target recognition system, apparatus, device and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104376611A (en) * | 2014-10-20 | 2015-02-25 | 胡昔兵 | Method and device for attendance of persons descending well on basis of face recognition |
CN105741375A (en) * | 2016-01-20 | 2016-07-06 | 华中师范大学 | Large-visual-field binocular vision infrared imagery checking method |
CN105893920A (en) * | 2015-01-26 | 2016-08-24 | 阿里巴巴集团控股有限公司 | Human face vivo detection method and device |
CN106033539A (en) * | 2015-03-20 | 2016-10-19 | 上海宝信软件股份有限公司 | Meeting guiding method and system based on video face recognition |
CN106600732A (en) * | 2016-11-23 | 2017-04-26 | 深圳市能信安科技股份有限公司 | Driver training time keeping system and method based on face recognition |
Also Published As
Publication number | Publication date |
---|---|
CN109359548A (en) | 2019-02-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109359548B (en) | Multi-face recognition monitoring method and device, electronic equipment and storage medium | |
CN108875833B (en) | Neural network training method, face recognition method and device | |
CN112328999B (en) | Double-recording quality inspection method and device, server and storage medium | |
CN106897658B (en) | Method and device for identifying human face living body | |
WO2021078157A1 (en) | Image processing method and apparatus, electronic device, and storage medium | |
KR102174595B1 (en) | System and method for identifying faces in unconstrained media | |
CN105612533B (en) | Living body detection method, living body detection system, and computer program product | |
US9767349B1 (en) | Learning emotional states using personalized calibration tasks | |
Abd El Meguid et al. | Fully automated recognition of spontaneous facial expressions in videos using random forest classifiers | |
JP7454105B2 (en) | Facial image quality evaluation method and device, computer equipment and computer program | |
CN106850648B (en) | Identity verification method, client and service platform | |
WO2016172872A1 (en) | Method and device for verifying real human face, and computer program product | |
Yu et al. | Is interactional dissynchrony a clue to deception? Insights from automated analysis of nonverbal visual cues | |
CN111310705A (en) | Image recognition method and device, computer equipment and storage medium | |
Jha et al. | An Automated Attendance System Using Facial Detection and Recognition Technology | |
Kumar et al. | Automated Attendance System Based on Face Recognition Using Opencv | |
CN111222374A (en) | Lie detection data processing method and device, computer equipment and storage medium | |
Rosy et al. | An enhanced intelligent attendance management system for smart campus | |
Nahar et al. | Twins and similar faces recognition using geometric and photometric features with transfer learning | |
Boncolmo et al. | Gender Identification Using Keras Model Through Detection of Face | |
US9501710B2 (en) | Systems, methods, and media for identifying object characteristics based on fixation points | |
KR20220017329A (en) | Online Test System using face contour recognition AI to prevent the cheating behaviour by using a front camera of examinee terminal installed audible video recording program and a auxiliary camera and method thereof | |
KR20210136771A (en) | UBT system using face contour recognition AI and method thereof | |
Kadhim et al. | A multimodal biometric database and case study for face recognition based deep learning | |
Muhammad et al. | A generic face detection algorithm in electronic attendance system for educational institute |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||