
CN109711384A - A kind of face identification method based on depth convolutional neural networks - Google Patents

A kind of face identification method based on depth convolutional neural networks

Info

Publication number
CN109711384A
Authority
CN
China
Prior art keywords
face
image
neural network
ellipse
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910017928.3A
Other languages
Chinese (zh)
Inventor
阚伟伟
王佩旭
吴明明
殷雄
吴祚煜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong Nebula Intelligent Technology Co Ltd
Jiangsu Nebula Grid Information Technology Co Ltd
Original Assignee
Nantong Nebula Intelligent Technology Co Ltd
Jiangsu Nebula Grid Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong Nebula Intelligent Technology Co Ltd, Jiangsu Nebula Grid Information Technology Co Ltd filed Critical Nantong Nebula Intelligent Technology Co Ltd
Priority to CN201910017928.3A priority Critical patent/CN109711384A/en
Publication of CN109711384A publication Critical patent/CN109711384A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a face recognition method based on a deep convolutional neural network, comprising the following steps: acquiring an original face image with a camera; inputting the original face image into an MTCNN neural network model to obtain a front-face frame image cropped around the face; inscribing an ellipse in the front-face frame image and extracting the face image inside the ellipse; inputting the face image inside the ellipse into an LCNN neural network model to obtain a face feature matrix; and comparing the face feature matrix with the feature matrices of the face images in a face template library to obtain the recognition result. The invention removes the background before the image is input into the deep convolutional neural network, extracts the face part in an elliptical form, and eliminates the influence of environmental factors on the image content.

Description

Face recognition method based on deep convolutional neural network
The technical field is as follows:
the invention relates to image processing and recognition technology, in particular to a face recognition method based on a deep convolutional neural network.
Background art:
in modern society, personal identity recognition technology is applied everywhere, and identification technologies based on human biological characteristics such as fingerprints, irises and faces have great market demand in many fields, for example access control systems, video surveillance, airport security and intelligent spaces. Although identity authentication based on fingerprints and irises offers higher accuracy and reliability than face recognition technology, face recognition has a wider application prospect owing to advantages such as being natural and friendly, interfering little with users, and being easily accepted by users.
Face recognition builds on digital image processing, computer vision, machine learning and other technologies, and analyses and compares face images against a database by means of computer processing. At present, face recognition methods mainly use a deep convolutional neural network to complete recognition through repeated convolution operations. In actual use, such methods often cannot recognize accurately because the input image contains too much background: part of the background is mixed in when the image is cropped, and this mixed-in background affects the result of face recognition, so the recognition accuracy is low.
The invention content is as follows:
in order to solve the above problems, the invention provides a face recognition method based on a deep convolutional neural network, which removes the background before the image is input into the deep convolutional neural network, extracts the face part in an elliptical form, and eliminates the influence of environmental factors on the image content.
In order to achieve the purpose, the technical scheme of the invention is as follows: a face recognition method based on a deep convolutional neural network comprises the following steps:
step 1), acquiring an original face image by using a camera;
step 2), inputting the original face image into an MTCNN (multi-task cascaded convolutional neural network) model to obtain a front-face frame image of the face cropped according to the facial features;
step 3), inscribing an ellipse in the front-face frame image of the human face, and extracting the human face image inside the ellipse;
step 4), inputting the face image in the ellipse into an LCNN neural network model to obtain a face characteristic matrix;
and step 5), comparing the face feature matrix with the feature matrix of the face image in the face template library, and identifying to obtain a face result.
Preferably, the processing of the MTCNN neural network model in step 2) to obtain the front-face frame image includes:
step 2-1, a P-Net network is adopted to obtain candidate windows and bounding-box regression vectors; the candidate windows are calibrated according to the bounding boxes, and overlapping windows are removed using the NMS method;
step 2-2, the pictures containing the candidate windows determined by the P-Net network are further trained in the R-Net network; the candidate boxes are fine-tuned using the bounding-box regression vectors, and overlapping windows are again removed using the NMS method;
step 2-3, the remaining false candidate windows are removed using the O-Net network, and the positions of five facial key points are output at the same time.
Preferably, the training of the MTCNN neural network model comprises three parts: classification of faces and non-faces, bounding box regression, and facial landmark localization, wherein:
the classification of faces and non-faces is determined using a cross entropy loss function:

$$L_i^{det} = -\left( y_i^{det} \log(p_i) + (1 - y_i^{det}) \log(1 - p_i) \right)$$

wherein $p_i$ represents the probability that the network predicts sample $i$ to be a face, and $y_i^{det} \in \{0, 1\}$ represents the ground-truth label (face or background); $L_i^{det}$ measures the closeness between the predicted face probability and the fact of whether the sample is a face, a smaller value indicating closer agreement, and the training goal of this part is to obtain the minimum value $\min(L_i^{det})$;
the bounding box regression calculates the regression loss by the Euclidean distance:

$$L_i^{box} = \left\| \hat{y}_i^{box} - y_i^{box} \right\|_2^2$$

wherein $\hat{y}_i^{box}$ is the bounding-box coordinates predicted by the network and $y_i^{box}$ is the actual ground-truth coordinates, each representing a quadruple consisting of the horizontal and vertical coordinates of the upper-left corner together with the height and width; the smaller the Euclidean distance of the box regression, the closer the predicted value is to the true value, and the training goal of this part is to obtain the minimum value $\min(L_i^{box})$;
the facial landmark localization likewise computes, through the network, the Euclidean distance between the predicted coordinates and the actual coordinates of the five facial feature points, and minimizes this distance; the formula is as follows:

$$L_i^{landmark} = \left\| \hat{y}_i^{landmark} - y_i^{landmark} \right\|_2^2$$

wherein $\hat{y}_i^{landmark}$ is the landmark coordinates predicted by the network and $y_i^{landmark}$ is the actual ground-truth landmark coordinates, each a ten-tuple consisting of the (x, y) coordinates of the five facial feature points; the smaller the Euclidean distance of the regression, the closer the predicted value is to the true value, and the training goal of this part is to obtain the minimum value $\min(L_i^{landmark})$;
the three parts are integrated to obtain the overall training objective:

$$\min \sum_{i=1}^{N} \sum_{j \in \{det,\, box,\, landmark\}} \alpha_j \, \beta_i^{j} \, L_i^{j}$$

wherein: $N$ is the number of training samples, $\alpha_j$ represents the weight of the different tasks; in the P-Net network and the R-Net network, $\alpha_{det} = 1$, $\alpha_{box} = 0.5$, $\alpha_{landmark} = 0.5$, and in the O-Net network, $\alpha_{det} = 1$, $\alpha_{box} = 1$, $\alpha_{landmark} = 0.5$; $\beta_i^{j} \in \{0, 1\}$ is the ground-truth tag representing the type of sample, and $L_i^{j}$ represents the corresponding loss function.
Preferably, after the ellipse is inscribed in step 3), each pixel point in the front-face frame image is looped over and it is judged whether the pixel point lies inside the ellipse; if so, the pixel point is kept, and if not, the pixel point is ignored, so that the elliptical face image is obtained.
Preferably, the face data information in the face template library in the step 5) is entered in advance, and the ellipse is inscribed during entry to extract the image in the ellipse.
Preferably, in the step 5), the cosine similarity operation is performed on the face feature matrix and the feature matrix of the face image in the face template library to obtain a similarity value between the original image and the image in the face template library.
The face recognition method based on the deep convolutional neural network has the following beneficial effects:
an ellipse is inscribed in the square face frame image and the face image inside the ellipse is extracted for feature acquisition; this effectively eliminates the influence of excessive environmental factors in the image on the image content and avoids the background of the acquired image interfering with face recognition.
Description of the drawings:
FIG. 1 is a schematic diagram of a MTCNN neural network cascade architecture according to the present invention;
FIG. 2 is a schematic diagram of extracting an elliptical image from an inscribed ellipse according to the present invention.
The specific implementation mode is as follows:
the technology of the present invention is further described below with reference to the drawings provided by the present invention:
the invention discloses a face recognition method based on a deep convolutional neural network, which comprises the following steps:
step 1), acquiring an original face image by using a camera;
step 2), inputting the original face image into an MTCNN (multi-task cascaded convolutional neural network) model to obtain a front-face frame image of the face cropped according to the facial features;
step 3), inscribing an ellipse in the front-face frame image of the human face, and extracting the human face image inside the ellipse;
step 4), inputting the face image in the ellipse into an LCNN neural network model to obtain a face characteristic matrix;
and step 5), comparing the face feature matrix with the feature matrix of the face image in the face template library, and identifying to obtain a face result.
The following steps are described in detail, wherein the acquisition of the original face image in step 1) may be performed by a smart phone or other smart devices;
as shown in fig. 1, the processing of the MTCNN neural network model in step 2) to obtain the front-face frame image includes:
step 2-1, a P-Net network is adopted to obtain candidate windows and bounding-box regression vectors; the candidate windows are calibrated according to the bounding boxes, and overlapping windows are removed using the NMS method;
step 2-2, the pictures containing the candidate windows determined by the P-Net network are further trained in the R-Net network; the candidate boxes are fine-tuned using the bounding-box regression vectors, and overlapping windows are again removed using the NMS method;
step 2-3, the remaining false candidate windows are removed using the O-Net network, and the positions of five facial key points are output at the same time; the NMS step common to all three stages is sketched below.
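For reference, the NMS (non-maximum suppression) step used in each of the three stages can be summarized by the following minimal Python/NumPy sketch; the [x1, y1, x2, y2] box layout and the 0.5 overlap threshold are illustrative assumptions of this sketch, not values fixed by this description:

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression over candidate face windows.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns the indices of the windows kept after removing overlaps.
    """
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]               # best-scoring window first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # intersection of the kept window with all remaining windows
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # discard windows that overlap the kept one beyond the threshold
        order = order[1:][iou <= iou_threshold]
    return keep
```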
The training process of the MTCNN neural network model comprises three parts, namely classification of faces and non-faces, bounding box regression, and facial landmark localization, wherein:
the classification of faces and non-faces is determined using a cross entropy loss function:

$$L_i^{det} = -\left( y_i^{det} \log(p_i) + (1 - y_i^{det}) \log(1 - p_i) \right)$$

wherein $p_i$ represents the probability that the network predicts sample $i$ to be a face, and $y_i^{det} \in \{0, 1\}$ represents the ground-truth label (face or background); $L_i^{det}$ measures the closeness between the predicted face probability and the fact of whether the sample is a face, a smaller value indicating closer agreement, and the training goal of this part is to obtain the minimum value $\min(L_i^{det})$;
the bounding box regression calculates the regression loss by the Euclidean distance:

$$L_i^{box} = \left\| \hat{y}_i^{box} - y_i^{box} \right\|_2^2$$

wherein $\hat{y}_i^{box}$ is the bounding-box coordinates predicted by the network and $y_i^{box}$ is the actual ground-truth coordinates, each representing a quadruple consisting of the horizontal and vertical coordinates of the upper-left corner together with the height and width; the smaller the Euclidean distance of the box regression, the closer the predicted value is to the true value, and the training goal of this part is to obtain the minimum value $\min(L_i^{box})$;
the facial landmark localization likewise computes, through the network, the Euclidean distance between the predicted coordinates and the actual coordinates of the five facial feature points, and minimizes this distance; the formula is as follows:

$$L_i^{landmark} = \left\| \hat{y}_i^{landmark} - y_i^{landmark} \right\|_2^2$$

wherein $\hat{y}_i^{landmark}$ is the landmark coordinates predicted by the network and $y_i^{landmark}$ is the actual ground-truth landmark coordinates, each a ten-tuple consisting of the (x, y) coordinates of the five facial feature points; the smaller the Euclidean distance of the regression, the closer the predicted value is to the true value, and the training goal of this part is to obtain the minimum value $\min(L_i^{landmark})$;
the three parts are integrated to obtain the overall training objective:

$$\min \sum_{i=1}^{N} \sum_{j \in \{det,\, box,\, landmark\}} \alpha_j \, \beta_i^{j} \, L_i^{j}$$

wherein: $N$ is the number of training samples, $\alpha_j$ represents the weight of the different tasks; in the P-Net network and the R-Net network, $\alpha_{det} = 1$, $\alpha_{box} = 0.5$, $\alpha_{landmark} = 0.5$, and in the O-Net network, $\alpha_{det} = 1$, $\alpha_{box} = 1$, $\alpha_{landmark} = 0.5$; $\beta_i^{j} \in \{0, 1\}$ is the ground-truth tag representing the type of sample, and $L_i^{j}$ represents the corresponding loss function. A minimal sketch of this combined objective is given below.
Through the above MTCNN processing, a square front-face frame image, cropped around the aligned facial features, is obtained from the original face image. When the face is cropped, part of the background is inevitably mixed into the frame image, which in practical applications affects the score of the face recognition result. Therefore, when the face template is generated, the face image is directly cut into an elliptical shape so that only the face part appears in the template, effectively eliminating the influence of environmental factors on the image content. Of course, the face images in the face template library are subjected to the same ellipse processing when they are entered, so that the images in the template library likewise show only the face region; this avoids the background interfering with face recognition during later comparison.
In step 3) specifically, after the ellipse is inscribed, each pixel point in the front-face frame image is looped over and it is judged whether the pixel point lies inside the ellipse; if so, it is kept, and if not, it is ignored, thereby obtaining an elliptical face image. The steps of inscribing the ellipse are as follows:
taking the top-left vertex of the original picture as the origin, the X axis is established horizontally to the right and the Y axis vertically downwards; through the face detection algorithm, if a face exists in the picture, the coordinates of the top-left corner of the face rectangle are (x1, y1) and the coordinates of the bottom-right vertex are (x2, y2), any coordinate smaller than 0 being assigned the value 0;
from these four coordinates, the semi-axis of the ellipse along the Y direction is calculated as long = (y2 - y1)/2 and the semi-axis along the X direction as short = (x2 - x1)/2; at the same time the coordinates O(centerX, centerY) of the ellipse centre are calculated, where: centerX = x1 + short, centerY = y1 + long;
whether each pixel point (m, n) is inside the ellipse is judged by applying the following formula:

$$\frac{(m - \text{centerX})^2}{\text{short}^2} + \frac{(n - \text{centerY})^2}{\text{long}^2} < 1.0$$

if the result of the operation is smaller than 1.0, the point lies inside the ellipse and the pixel point is kept; otherwise the pixel point is ignored. A vectorized sketch of this extraction is given below.
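The per-pixel loop above can be written compactly in vectorized form; the following minimal NumPy sketch assumes integer pixel coordinates and writes ignored pixels with a fill value of 0, both assumptions of this sketch:

```python
import numpy as np

def extract_ellipse_face(image, x1, y1, x2, y2, fill=0):
    """Keep only the pixels inside the ellipse inscribed in the face box.

    image: (H, W, C) array; (x1, y1) / (x2, y2): integer top-left and
    bottom-right corners of the detected face rectangle.
    """
    x1, y1 = max(x1, 0), max(y1, 0)          # clamp negative coordinates to 0
    long_ax = (y2 - y1) / 2.0                # semi-axis along the Y direction
    short_ax = (x2 - x1) / 2.0               # semi-axis along the X direction
    cx, cy = x1 + short_ax, y1 + long_ax     # ellipse centre O(centerX, centerY)
    face = image[y1:y2, x1:x2].copy()
    # vectorized form of: (m - centerX)^2/short^2 + (n - centerY)^2/long^2 < 1.0
    n, m = np.mgrid[y1:y2, x1:x2]            # n: Y coordinates, m: X coordinates
    inside = ((m - cx) / short_ax) ** 2 + ((n - cy) / long_ax) ** 2 < 1.0
    face[~inside] = fill                     # ignore pixels outside the ellipse
    return face
```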
In step 4), an LCNN (Light CNN) neural network model is adopted. It is a lightweight deep convolutional neural network designed for learning compact embeddings from large-scale face data containing a large number of noisy labels. The model introduces a variation of maxout activation, called Max-Feature-Map (MFM), into each convolutional layer of the CNN. Unlike maxout, which linearly approximates an arbitrary convex activation function by using many feature maps, MFM is realized through a competitive relationship between two feature maps. MFM can not only separate noise from informative signals but also perform feature selection between the two feature maps; in addition, the model provides a semantic bootstrapping method that makes the network predictions more consistent with the noisy labels. A minimal sketch of the MFM operation follows.
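For illustration, the MFM operation can be sketched as follows in NumPy; the channel-last (H, W, C) feature layout is an assumption of this sketch rather than a detail fixed by the description:

```python
import numpy as np

def max_feature_map(x):
    """Max-Feature-Map (MFM): element-wise maximum of two channel halves.

    The competition between each pair of feature maps suppresses noisy
    activations and acts as an implicit feature selector; a (H, W, C)
    input yields a (H, W, C/2) output.
    """
    c = x.shape[-1]
    assert c % 2 == 0, "MFM requires an even number of channels"
    return np.maximum(x[..., : c // 2], x[..., c // 2 :])

# example: an (8, 8, 64) feature map becomes (8, 8, 32) after MFM
features = np.random.randn(8, 8, 64)
print(max_feature_map(features).shape)       # (8, 8, 32)
```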
In step 5), a cosine similarity operation is performed between the face feature matrix and the feature matrices of the face images in the face template library, yielding a similarity value between the original image and each image in the template library. In actual use (for example in an attendance system), a threshold is set: if the obtained similarity value is above the threshold, the two pictures are judged to be of the same person and the attendance check succeeds; if it is below the threshold, they are judged to be different persons, indicating that the attendance check fails. A minimal sketch of this comparison follows.
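The following minimal sketch shows the comparison against the template library; the 0.5 threshold and the dictionary-based library are illustrative assumptions (the description only states that a threshold is set in actual use):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face feature matrices (flattened)."""
    a, b = np.ravel(a), np.ravel(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(query, template_library, threshold=0.5):
    """Return (person_id, similarity) of the best match, or (None, sim)."""
    best_id, best_sim = None, -1.0
    for person_id, template in template_library.items():
        sim = cosine_similarity(query, template)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    if best_sim >= threshold:
        return best_id, best_sim             # e.g. attendance check succeeds
    return None, best_sim                    # attendance check fails
```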
Therefore, the scope of the present invention should not be limited to the disclosure of the embodiments, but includes various alternatives and modifications without departing from the scope of the present invention, which is defined by the appended claims.

Claims (6)

1. A face recognition method based on a deep convolutional neural network is characterized by comprising the following steps:
step 1), acquiring an original face image by using a camera;
step 2), inputting the original face image into an MTCNN (multi-task cascaded convolutional neural network) model to obtain a front-face frame image of the face cropped according to the facial features;
step 3), inscribing an ellipse in the front-face frame image of the human face, and extracting the human face image inside the ellipse;
step 4), inputting the face image in the ellipse into an LCNN neural network model to obtain a face characteristic matrix;
and step 5), comparing the face feature matrix with the feature matrix of the face image in the face template library, and identifying to obtain a face result.
2. The face recognition method based on the deep convolutional neural network of claim 1, wherein: the processing of the MTCNN neural network model in step 2) to obtain the front-face frame image comprises the following steps:
step 2-1, a P-Net network is adopted to obtain candidate windows and bounding-box regression vectors; the candidate windows are calibrated according to the bounding boxes, and overlapping windows are removed using the NMS method;
step 2-2, the pictures containing the candidate windows determined by the P-Net network are further trained in the R-Net network; the candidate boxes are fine-tuned using the bounding-box regression vectors, and overlapping windows are again removed using the NMS method;
step 2-3, the remaining false candidate windows are removed using the O-Net network, and the positions of five facial key points are output at the same time.
3. The face recognition method based on the deep convolutional neural network of claim 2, wherein: the training of the MTCNN neural network model comprises three parts: classification of faces and non-faces, bounding box regression, and facial landmark localization, wherein:
the classification of faces and non-faces is determined using a cross entropy loss function:

$$L_i^{det} = -\left( y_i^{det} \log(p_i) + (1 - y_i^{det}) \log(1 - p_i) \right)$$

wherein $p_i$ represents the probability that the network predicts sample $i$ to be a face, and $y_i^{det} \in \{0, 1\}$ represents the ground-truth label (face or background); $L_i^{det}$ measures the closeness between the predicted face probability and the fact of whether the sample is a face, a smaller value indicating closer agreement, and the training goal of this part is to obtain the minimum value $\min(L_i^{det})$;
the bounding box regression calculates the regression loss by the Euclidean distance:

$$L_i^{box} = \left\| \hat{y}_i^{box} - y_i^{box} \right\|_2^2$$

wherein $\hat{y}_i^{box}$ is the bounding-box coordinates predicted by the network and $y_i^{box}$ is the actual ground-truth coordinates, each representing a quadruple consisting of the horizontal and vertical coordinates of the upper-left corner together with the height and width; the smaller the Euclidean distance of the box regression, the closer the predicted value is to the true value, and the training goal of this part is to obtain the minimum value $\min(L_i^{box})$;
the facial landmark localization likewise computes, through the network, the Euclidean distance between the predicted coordinates and the actual coordinates of the five facial feature points, and minimizes this distance; the formula is as follows:

$$L_i^{landmark} = \left\| \hat{y}_i^{landmark} - y_i^{landmark} \right\|_2^2$$

wherein $\hat{y}_i^{landmark}$ is the landmark coordinates predicted by the network and $y_i^{landmark}$ is the actual ground-truth landmark coordinates, each a ten-tuple consisting of the (x, y) coordinates of the five facial feature points; the smaller the Euclidean distance of the regression, the closer the predicted value is to the true value, and the training goal of this part is to obtain the minimum value $\min(L_i^{landmark})$;
the three parts are integrated to obtain the overall training objective:

$$\min \sum_{i=1}^{N} \sum_{j \in \{det,\, box,\, landmark\}} \alpha_j \, \beta_i^{j} \, L_i^{j}$$

wherein: $N$ is the number of training samples, $\alpha_j$ represents the weight of the different tasks; in the P-Net network and the R-Net network, $\alpha_{det} = 1$, $\alpha_{box} = 0.5$, $\alpha_{landmark} = 0.5$, and in the O-Net network, $\alpha_{det} = 1$, $\alpha_{box} = 1$, $\alpha_{landmark} = 0.5$; $\beta_i^{j} \in \{0, 1\}$ is the ground-truth tag representing the type of sample, and $L_i^{j}$ represents the corresponding loss function.
4. The face recognition method based on the deep convolutional neural network of claim 1, wherein: after the ellipse is inscribed in step 3), each pixel point in the front-face frame image is looped over and it is judged whether the pixel point lies inside the ellipse; if so, the pixel point is kept, and if not, it is ignored, thereby obtaining the elliptical face image.
5. The face recognition method based on the deep convolutional neural network of claim 1, wherein: the face data information in the face template library in step 5) is entered in advance, and the image inside the ellipse is extracted by inscribing the ellipse during entry.
6. The face recognition method based on the deep convolutional neural network of claim 1, wherein: in the step 5), cosine similarity operation is performed on the face characteristic matrix and the characteristic matrix of the face image in the face template library to obtain a similarity value between the original image and the image in the face template library.
CN201910017928.3A 2019-01-09 2019-01-09 A kind of face identification method based on depth convolutional neural networks Pending CN109711384A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910017928.3A CN109711384A (en) 2019-01-09 2019-01-09 A kind of face identification method based on depth convolutional neural networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910017928.3A CN109711384A (en) 2019-01-09 2019-01-09 A kind of face identification method based on depth convolutional neural networks

Publications (1)

Publication Number Publication Date
CN109711384A (en) 2019-05-03

Family

ID=66260989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910017928.3A Pending CN109711384A (en) 2019-01-09 2019-01-09 A kind of face identification method based on depth convolutional neural networks

Country Status (1)

Country Link
CN (1) CN109711384A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105260703A (en) * 2015-09-15 2016-01-20 西安邦威电子科技有限公司 Detection method suitable for smoking behavior of driver under multiple postures
SG10201709945UA (en) * 2016-11-30 2018-06-28 Altumview Systems Inc Face detection using small-scale convolutional neural network (cnn) modules for embedded systems
CN108564052A (en) * 2018-04-24 2018-09-21 南京邮电大学 Multi-cam dynamic human face recognition system based on MTCNN and method
CN108875602A (en) * 2018-05-31 2018-11-23 珠海亿智电子科技有限公司 Monitor the face identification method based on deep learning under environment
CN108549886A (en) * 2018-06-29 2018-09-18 汉王科技股份有限公司 A kind of human face in-vivo detection method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Jiajun et al.: "中学几何词典" (Dictionary of Middle-School Geometry), China People's Public Security University Press, pages 510-511 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110110673A (en) * 2019-05-10 2019-08-09 杭州电子科技大学 A kind of face identification method based on two-way 2DPCA and cascade feedforward neural network
CN110210393A (en) * 2019-05-31 2019-09-06 百度在线网络技术(北京)有限公司 The detection method and device of facial image
CN110363091B (en) * 2019-06-18 2021-08-10 广州杰赛科技股份有限公司 Face recognition method, device and equipment under side face condition and storage medium
CN110363091A (en) * 2019-06-18 2019-10-22 广州杰赛科技股份有限公司 Face identification method, device, equipment and storage medium in the case of side face
CN110263731A (en) * 2019-06-24 2019-09-20 电子科技大学 A kind of single step face detection system
CN110569756A (en) * 2019-08-26 2019-12-13 长沙理工大学 face recognition model construction method, recognition method, device and storage medium
CN110569756B (en) * 2019-08-26 2022-03-22 长沙理工大学 Face recognition model construction method, recognition method, device and storage medium
CN110738607A (en) * 2019-09-09 2020-01-31 平安国际智慧城市科技股份有限公司 Method, device and equipment for shooting driving license based on artificial intelligence and storage medium
CN110728807A (en) * 2019-09-27 2020-01-24 深圳市大拿科技有限公司 Anti-dismantling method of intelligent doorbell and related product
CN113283353A (en) * 2021-05-31 2021-08-20 创芯国际生物科技(广州)有限公司 Organoid cell counting method and system based on microscopic image
CN113742421A (en) * 2021-08-20 2021-12-03 郑州云智信安安全技术有限公司 Network identity authentication method based on distributed storage and image processing
CN113742421B (en) * 2021-08-20 2023-09-12 郑州云智信安安全技术有限公司 Network identity authentication method based on distributed storage and image processing
CN113936312A (en) * 2021-10-12 2022-01-14 南京视察者智能科技有限公司 Face recognition base screening method based on deep learning graph convolution network
CN113936312B (en) * 2021-10-12 2024-06-07 南京视察者智能科技有限公司 Face recognition base screening method based on deep learning graph convolution network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190503