
CN105426882A - Method for rapidly positioning human eyes in human face image - Google Patents

Method for rapidly positioning human eyes in human face image

Info

Publication number
CN105426882A
Authority
CN
China
Prior art keywords
human eye
sigma
random forest
error
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510991486.4A
Other languages
Chinese (zh)
Other versions
CN105426882B (en)
Inventor
马越
贺光辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201510991486.4A
Publication of CN105426882A
Application granted
Publication of CN105426882B
Expired - Fee Related
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Ophthalmology & Optometry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of machine learning and provides a method for rapidly positioning human eyes in a face image. Eye localization is performed with a random forest built from adaptive gradient-boosted decision trees, and the training samples are augmented into multiple poses and variations so that the method is robust. A multistage localization structure is also adopted: each stage centers a smaller search region on the previous stage's estimate, which improves both convergence speed and accuracy. Finally, the outputs of the random forest are combined by weighted averaging to determine the eye coordinates, giving good real-time performance. The low-complexity binary-tree-based algorithm further improves localization precision while reducing computation time. Regression trees are learned from a large number of diversified eye samples based on the CART model, which greatly improves robustness and yields a substantial performance gain; the method is applicable to the field of eye localization.

Description

A method for rapidly positioning human eyes in a face image
Technical field
The invention belongs to the field of computer vision and image processing, and specifically relates to a method for rapidly positioning human eyes in a face image, applied in the field of eye recognition.
Background technology
Whether in interpersonal communication or in human-computer interaction, the eyes are among the most important facial features, so locating them accurately matters a great deal. Since the early days of the Internet, the protection of private information and its security have been hot issues, and iris recognition, being more secure than fingerprint recognition, has come into view. The iris has a rich coding space, and every person's iris pattern is unique and easy to distinguish. Moreover, a living iris responds differently under different illumination conditions, so only a live iris can pass detection; this eliminates security risks such as copied fingerprints and makes iris recognition the most secure identity-recognition technology currently available. At the same time, users want the highest possible security together with a friendly user experience. A user-friendly example is face recognition: in open settings such as security checks and access control, a face can be captured by hardware without the user being aware of it, so no extra burden is placed on the user.
In the field of human-computer interaction, eye localization also has wide application. With the appearance of head-mounted wearable devices such as Google Glass, people have begun to seek new interaction modes beyond the keyboard and mouse, and the eye, being the part of the body closest to the device, has become a popular alternative. Some close-range hardware devices have already appeared; for example, Pupil Labs has developed a device consisting of a head-mounted glasses-like assembly and several cameras that captures the user's eye movements. Such eye-tracking techniques let people browse web pages without using their hands, help players control characters in games, and can be applied in scenarios such as virtual reality. Locating the eye center is a key step in this kind of human-computer interaction. When the gaze moves across a screen, the rotation of the eyeball is very small, so precise control such as clicking during web browsing is impossible without an accurate eye-localization technique.
Eye localization is also widely used in other fields. For example, to address the serious traffic accidents caused by fatigued driving, much research uses in-vehicle equipment to observe the driver's eye movement, eyelid closure, facial expression and head motion, judge whether the driver is fatigued, and take the necessary measures when fatigue is detected. Over the last decade, eye localization has received increasing attention from both academia and industry.
Many relatively mature eye-localization methods already exist. For example, "An integrating, transformation-oriented approach to concurrency control and undo in group editors", published by M. Ressel in the Proceedings of the 1996 ACM Conference on Computer Supported Cooperative Work, uses the Hough transform to identify regular geometric objects and thereby locate eyes in an image. "Automatic adaptive center of pupil detection using face detection and cdf analysis", published by M. Asadifard and J. Shanbezadeh in the Proceedings of the International MultiConference of Engineers and Computer Scientists, uses an active shape model (ASM) to model the facial feature points as a whole and exploits the relations between feature points to achieve localization. In recent years, as machine learning has grown in depth and breadth across many areas, its applications in eye localization have also multiplied. Building on the strong performance of Haar features and the AdaBoost classifier in face detection, "Robust precise eye location under probabilistic framework", published by Y. Ma et al. at the Sixth IEEE International Conference on Automatic Face and Gesture Recognition (2004), applies a similar approach to eye localization: multiple simple classifiers are cascaded, each new classifier emphasizing the mistakes of its predecessors, and they are finally combined into a strong classifier. Although these methods are relatively mature, each has its own advantages and disadvantages. The main shortcoming is that low-complexity methods cannot break through the accuracy bottleneck, while methods that do reach the required accuracy are too complex to run in real time. Many eye-localization scenarios demand both real-time performance and accuracy, which calls for a method that does well on both counts.
Summary of the invention
In order to solve the above problems, the invention provides a method for rapidly positioning human eyes in a face image.
The technical solution of the invention is as follows:
A method for rapidly positioning human eyes in a face image, characterized in that it comprises the following steps:
Step 1: based on a random forest machine learning algorithm, train a number of decision trees and combine them into a random forest using ensemble techniques;
Step 2: input the face image to be tested and convert it into a gray-scale map, i.e. a two-dimensional matrix;
Step 3: apply the random forest obtained in step 1 to the two-dimensional gray-value matrix obtained in step 2 using a multistage localization structure: starting from a fixed coarse eye region, take the current localization result as the center of the next search region, narrow the search range stage by stage, and iterate until the last stage returns its localization result, which determines the eye position coordinates;
Step 4: take the weighted mean of the multiple eye coordinates produced in step 3 by the many decision trees in the random forest to obtain the final eye localization result.
The training process based on the random forest machine learning algorithm in step 1 comprises the following concrete steps:
1.1) Normalize the input face sample gray-scale images so that the coordinates of the upper-left, upper-right and lower-right corners of each picture are (0,0), (1,0) and (1,1) respectively.
1.2) As the "seed" of the random forest, apply variation processing to the samples: starting from the standard samples, randomize them by applying random offsets within a certain range along the horizontal and vertical axes and in picture scale, together with random rotation within a certain angle, generating multi-pose training samples, as illustrated in the sketch below.
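As an illustration of this augmentation step, the sketch below (Python/NumPy; the function name, the offset/scale/rotation ranges and the coordinate convention are assumptions for illustration, not values taken from the patent) perturbs a normalized eye coordinate with a random rotation, scale change and offset. In practice the face crop itself would be warped with the same transform so that image and label stay consistent.

    import numpy as np

    def augment_sample(eye_xy, rng, max_shift=0.05, max_scale=0.10, max_angle_deg=10.0):
        """Return a randomly perturbed copy of an eye coordinate in the normalized
        [0,1]x[0,1] face frame. All ranges are illustrative assumptions."""
        x, y = eye_xy
        # Random rotation about the centre of the normalized picture, (0.5, 0.5).
        theta = np.deg2rad(rng.uniform(-max_angle_deg, max_angle_deg))
        c, s = np.cos(theta), np.sin(theta)
        xr = 0.5 + c * (x - 0.5) - s * (y - 0.5)
        yr = 0.5 + s * (x - 0.5) + c * (y - 0.5)
        # Random change of picture scale and random offsets along both axes.
        scale = 1.0 + rng.uniform(-max_scale, max_scale)
        dx, dy = rng.uniform(-max_shift, max_shift, size=2)
        return np.array([0.5 + scale * (xr - 0.5) + dx,
                         0.5 + scale * (yr - 0.5) + dy])

    rng = np.random.default_rng(0)
    multi_pose = [augment_sample((0.30, 0.35), rng) for _ in range(10)]  # 10 variants of one sample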
1.3) From the two-dimensional matrices obtained in 1.2), take the most direct feature values, the gray values of pixels at given coordinates in the image, as input, and the eye coordinates as output. The decision tree is a full binary tree: the root node and each intermediate node store a trained gray-level difference $I(I_1)-I(I_2)$ between two coordinates and a threshold $T$. We define the binary test on image $I$ as:
$$\mathrm{bintest}\bigl(I;\,I(I_1),I(I_2),T\bigr)=\begin{cases}0, & I(I_1)-I(I_2)<T\\ 1, & \text{otherwise}\end{cases}$$
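A minimal sketch of this binary test, assuming the gray-scale image is a 2-D NumPy array and the two probe positions $I_1$, $I_2$ are given in the normalized coordinates defined in step 1.1) (function and variable names are illustrative):

    import numpy as np

    def binary_test(img, loc1, loc2, threshold):
        """Return 0 if I(loc1) - I(loc2) < threshold, and 1 otherwise.

        img:  2-D gray-scale array; loc1, loc2: (x, y) in normalized [0, 1] coordinates.
        """
        h, w = img.shape
        x1, y1 = int(loc1[0] * (w - 1)), int(loc1[1] * (h - 1))  # map to pixel indices
        x2, y2 = int(loc2[0] * (w - 1)), int(loc2[1] * (h - 1))
        diff = int(img[y1, x1]) - int(img[y2, x2])               # gray-level difference
        return 0 if diff < threshold else 1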
Preferably, this binary tree is a decision tree obtained by the adaptive gradient boosting decision tree algorithm, with the following concrete steps:
First, initialize the fitting-error function:
$$F_0(x)=\arg\min_{F}\sum_{i=1}^{N}L\bigl(y_i,F(x_i)\bigr)$$
where $F(x)$ is the decision-tree function and $L$ is the fitting-error (loss) function;
Next, compute the pseudo-residuals in the negative gradient direction:
$$\tilde{y}_i=-\left[\frac{\partial L\bigl(y_i,F(x_i)\bigr)}{\partial F(x_i)}\right]_{F(x)=F_{m-1}(x)},\quad i=1,\dots,N$$
where we define the loss function as
$$L\bigl(y_i,F(x_i)\bigr)=\tfrac{1}{2}\bigl(y_i-F(x_i)\bigr)^2$$
Substituting this loss into the expression above gives
$$\tilde{y}_i=y_i-F_{m-1}(x_i)$$
Next, update the fit of the pseudo-residuals:
$$\alpha_m=\arg\min_{\alpha,\rho}\sum_{i=1}^{N}\bigl\|\tilde{y}_i-\rho\,h(x_i;\alpha)\bigr\|^2$$
where $h(x_i;\alpha)$ is the fitting result; for example, the first fit is $h(x_i;\alpha_1)$.
Next, update the sample weights and the regression step multiplier:
$$\omega_i=\bigl\|y_i-F_M(x_i)\bigr\|^2\Big/\Bigl(\sum_{i=1}^{N}\bigl\|y_i-F_M(x_i)\bigr\|^2\Bigr)$$
$$\rho_m=\arg\min_{\rho}\sum_{i=1}^{N}L\bigl(y_i,\,F_{m-1}(x_i)+\rho\,h(x_i;\alpha_m)\bigr)$$
Finally, update the model for $m=1\to M$; after $M$ rounds the iteration ends:
$$F_m(x)=F_{m-1}(x)+\gamma\,\rho_m\,h(x;\alpha_m)$$
where $0<\gamma\le 1$ is the learning rate; the size of $\gamma$ determines the convergence speed of the iteration.
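The following sketch shows the gradient-boosting loop above for the squared loss, where the pseudo-residuals reduce to $y_i-F_{m-1}(x_i)$. To keep it self-contained it uses a depth-1 regression stump as the base learner $h(x;\alpha)$ and a one-dimensional toy target; the stump, the closed-form line search and all names are illustrative assumptions, not the patent's implementation.

    import numpy as np

    def fit_stump(x, residual):
        """Fit a depth-1 regression tree (stump) to the pseudo-residuals.

        A stand-in for the patent's CART regression tree; any regressor would do here.
        """
        best = None
        for t in np.unique(x)[:-1]:              # the last value would leave an empty right side
            left, right = residual[x <= t], residual[x > t]
            sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, t, left.mean(), right.mean())
        _, t, lv, rv = best
        return lambda q: np.where(q <= t, lv, rv)

    def gradient_boost(x, y, n_rounds=20, learning_rate=0.4):
        """Gradient boosting with squared loss, so the pseudo-residuals are y - F_{m-1}(x)."""
        f0 = float(y.mean())                             # F_0: constant minimizing the squared loss
        fx = np.full_like(y, f0, dtype=float)
        stages = []
        for _ in range(n_rounds):
            residual = y - fx                            # negative gradient of the squared loss
            h = fit_stump(x, residual)                   # h(x; alpha_m) fitted to the pseudo-residuals
            hx = h(x)
            rho = float(residual @ hx) / float(hx @ hx + 1e-12)  # line search, closed form here
            fx = fx + learning_rate * rho * hx           # F_m = F_{m-1} + gamma * rho_m * h(x; alpha_m)
            stages.append((rho, h))

        def predict(q):
            out = np.full(len(q), f0, dtype=float)
            for rho, h in stages:
                out += learning_rate * rho * h(q)
            return out
        return predict

    # Toy usage: regress a scalar target (e.g. an eye x-coordinate) from one scalar feature.
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, 200)
    y = 0.3 + 0.2 * np.sin(4.0 * x) + 0.01 * rng.standard_normal(200)
    model = gradient_boost(x, y)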
1.4) Using the method of 1.3), the n pictures are divided into two classes and placed in the left and right child nodes of the root node; at the next layer each class is again split by a binary test, giving four classes of pictures in total, and so on, stopping when a suitable tree depth is reached;
1.5) The classification quality in 1.4) can be characterized by the degree of aggregation of the estimates, i.e. the sum of squared errors, so the goal of training is to minimize the following error:
$$Q_{node}=\sum_{p\in S_l}\Bigl\|p-\frac{1}{|S_l|}\sum_{q\in S_l}q\Bigr\|^2+\sum_{p\in S_r}\Bigl\|p-\frac{1}{|S_r|}\sum_{q\in S_r}q\Bigr\|^2$$
where $S_l$ and $S_r$ are the sets of coordinates in the left and right child nodes produced by classifying with a given feature and threshold. Training stops when the specified tree depth or the specified error size is reached.
Since the prediction for a picture is the coordinate mean of the samples in the same class, the prediction is an estimate, and the more tightly the estimates aggregate, the higher the confidence of the prediction and the better the training. The degree of aggregation could be expressed by the variance, but we must also guarantee the number of pictures carried on each node, i.e. keep the classes as balanced as possible, so the sum of squared errors is the most suitable measure. The training objective is therefore to minimize the following error:
$$Q_{tree}=\sum_{i}\Bigl\{\sum_{p\in S_i}\Bigl\|p-\frac{1}{|S_i|}\sum_{q\in S_i}q\Bigr\|^2\Bigr\}$$
where $i=1,2,\dots,2^{d}$ indexes the $i$-th class, corresponding to the $i$-th leaf node, $2^{d}$ is the total number of leaf nodes, and $S_i$ is the set corresponding to the $i$-th class, i.e. the set of pictures at the $i$-th leaf node.
In practice, however, a binary tree is inherently a nonlinear system, and considering all influencing factors simultaneously is difficult and complicated. The training process therefore adopts a greedy method: starting from the root node and proceeding by depth, the feature parameters and threshold of each node are determined in turn so as to make the node error $Q_{node}$ above as small as possible. Note that at this point the training samples are not yet on the leaf nodes; as the depth advances we treat the current node as a leaf node and still use $Q_{tree}$ to denote the sum of squared Euclidean distances of all samples to the estimated center of their class. Since only one node is trained at a time, only the training samples of that node change. Thus when training each node we traverse the candidates and select the best feature and threshold so that the two resulting classes have minimum variance; the objective function becomes:
$$Q_{tree}=\sum_{p\in S_l}\Bigl\|p-\frac{1}{|S_l|}\sum_{q\in S_l}q\Bigr\|^2+\sum_{p\in S_r}\Bigl\|p-\frac{1}{|S_r|}\sum_{q\in S_r}q\Bigr\|^2$$
where $S_l$ and $S_r$ are the sets of the left and right child nodes respectively.
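A minimal sketch of this greedy, depth-wise training, under the assumptions that each candidate feature is a precomputed gray-level difference $I(I_1)-I(I_2)$ (one column per candidate pixel pair) and that candidate features and thresholds are sampled at random; the sampling strategy, candidate count and depth limit are illustrative, not taken from the patent.

    import numpy as np

    def sse_to_mean(coords):
        """Sum of squared Euclidean distances of a set of coordinates to its own mean."""
        if len(coords) == 0:
            return 0.0
        return float(((coords - coords.mean(axis=0)) ** 2).sum())

    def best_split(features, coords, n_candidates=100, rng=None):
        """Pick the feature index and threshold that minimize the left+right squared error."""
        rng = rng or np.random.default_rng()
        best = (np.inf, 0, 0.0)
        for _ in range(n_candidates):
            j = int(rng.integers(features.shape[1]))                     # candidate pixel pair
            t = rng.uniform(features[:, j].min(), features[:, j].max())  # candidate threshold
            left = features[:, j] < t                                    # binary test: diff < T
            q = sse_to_mean(coords[left]) + sse_to_mean(coords[~left])
            if q < best[0]:
                best = (q, j, t)
        return best[1], best[2]

    def build_tree(features, coords, depth=0, max_depth=11):
        """Grow the full binary tree node by node; leaves store the mean eye coordinate."""
        if depth == max_depth or len(coords) <= 1:
            return {"eye": coords.mean(axis=0)}
        j, t = best_split(features, coords)
        left = features[:, j] < t
        if left.all() or (~left).all():                                  # degenerate split: stop here
            return {"eye": coords.mean(axis=0)}
        return {"feature": j, "T": t,
                "left": build_tree(features[left], coords[left], depth + 1, max_depth),
                "right": build_tree(features[~left], coords[~left], depth + 1, max_depth)}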
Compared with the prior art, the beneficial effects of the invention are:
1) The invention further improves localization precision, and the low-complexity binary-tree-based algorithm reduces time consumption; regression trees are learned from a large number of diversified eye samples based on the CART model, which also greatly improves robustness.
2) To address the weak localization ability of a single regression-tree model, the invention combines the multiple weak localizers obtained by repeated learning into a strong localizer using two ensemble techniques: random forests and gradient-boosted decision trees.
3) In accordance with the application environment of eye localization, the decision trees continually shrink the search region during localization, and sample weights derived from the localization results are introduced to further improve the training and prediction processes of the random forest and the gradient-boosted decision trees, finally achieving a significant performance gain.
4) Eye localization based on the CART model achieves localization precision and robustness superior to most other models even on low-resolution images and at extremely low computational cost; the only price is a small amount of extra storage space, which makes it a good choice for fast eye localization.
Brief description of the drawings
Fig. 1 is the flow chart of the method for rapidly positioning human eyes in a face image according to the invention;
Fig. 2 shows the generation of the multi-pose training samples;
Fig. 3 shows the process by which the gradient-boosted decision trees use ensemble techniques for eye localization;
Fig. 4 shows the effect of the multistage localization structure;
Fig. 5 shows the structure of a single decision tree.
Embodiment
To make the technical measures, creative features, objectives and effects of the invention easy to understand, the invention is further described below in conjunction with the figures.
Fig. 1 is the flow chart of the method for rapidly positioning human eyes in a face image according to the invention; the method comprises the following steps:
Step 1: based on a random forest machine learning algorithm, train 25 decision trees of depth 11;
First, as shown in Fig. 2, normalize and vary the samples: the coordinates of the upper-left, upper-right and lower-right corners of each picture are (0,0), (1,0) and (1,1); starting from the standard samples, apply random offsets within a certain range along the horizontal and vertical axes and in picture scale, together with random rotation within a certain angle, generating multi-pose training samples;
The trees are built by gradient boosting; as shown in Fig. 3, gradient-boosted decision trees with γ = 0.4 continually correct the direction of advance so as to reach the eye center more accurately, giving better results.
A single decision tree is obviously a weak classifier; ensemble techniques are used to combine these trees into a random forest, which forms a strong classifier.
Step 2: input the face picture and convert it into a two-dimensional gray-scale matrix;
Step 3: as shown in Fig. 5, use each decision-tree model from step 1. A decision tree is a binary tree, generally a full binary tree. Starting from the root node, each non-leaf node asks a question that was trained and stored in advance, and the answer decides whether to proceed to the left or right child, until a leaf node of the decision tree is reached. In total 25 eye coordinates are obtained, as in the sketch below.
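A sketch of this prediction pass under an assumed node layout (a nested dict per node; the patent does not prescribe this storage format): each internal node keeps its two probe positions and threshold, each leaf keeps the mean eye coordinate of its training samples, and the forest output is the (weighted) average over all trees.

    import numpy as np

    # Assumed node layout (not the patent's storage format):
    #   internal node: {"l1": (x, y), "l2": (x, y), "T": threshold, "left": ..., "right": ...}
    #   leaf node:     {"eye": (x, y)}  -- mean eye coordinate of the samples that reached it

    def traverse(tree, img):
        """Walk one decision tree from the root to a leaf using the stored binary tests."""
        h, w = img.shape
        node = tree
        while "eye" not in node:
            (x1, y1), (x2, y2), t = node["l1"], node["l2"], node["T"]
            diff = int(img[int(y1 * (h - 1)), int(x1 * (w - 1))]) - \
                   int(img[int(y2 * (h - 1)), int(x2 * (w - 1))])
            node = node["left"] if diff < t else node["right"]   # the stored "question"
        return np.asarray(node["eye"], dtype=float)

    def forest_predict(forest, img, weights=None):
        """Weighted mean of the per-tree eye estimates (uniform weights by default)."""
        preds = np.array([traverse(tree, img) for tree in forest])   # e.g. 25 coordinates
        weights = np.ones(len(forest)) if weights is None else np.asarray(weights, dtype=float)
        return (weights[:, None] * preds).sum(axis=0) / weights.sum()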
A five-level localization structure is used to determine the eye position coordinates. As shown in Fig. 4, to improve accuracy the invention adopts a multistage localization structure, also called a pyramid structure, which improves precision by continually shrinking the search region: starting from a fixed, larger ROI (Region of Interest), the current localization result becomes the center of the next, smaller ROI, and the iteration continues until the localization result of the last ROI is returned, as sketched below.
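A sketch of this coarse-to-fine loop, reusing forest_predict() from the previous sketch; the initial ROI, the shrink factor and the number of levels are illustrative assumptions:

    import numpy as np

    def multistage_localize(forest, face_img, n_levels=5, shrink=0.6):
        """Refine the eye estimate over progressively smaller ROIs centred on the last result."""
        h, w = face_img.shape
        # Level 0: a fixed coarse ROI (here: a region covering the upper part of the face).
        cx, cy, rw, rh = 0.5 * w, 0.35 * h, 0.8 * w, 0.5 * h
        for _ in range(n_levels):
            x0, x1 = int(max(cx - rw / 2, 0)), int(min(cx + rw / 2, w))
            y0, y1 = int(max(cy - rh / 2, 0)), int(min(cy + rh / 2, h))
            roi = face_img[y0:y1, x0:x1]
            ex, ey = forest_predict(forest, roi)                 # eye position in normalized ROI coords
            cx, cy = x0 + ex * (x1 - x0), y0 + ey * (y1 - y0)    # map back to face-image pixels
            rw, rh = rw * shrink, rh * shrink                    # smaller search region next level
        return cx, cy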
Step 4: average the 25 eye coordinates obtained in step 3 to obtain the final eye localization result.

Claims (4)

1. A method for rapidly positioning human eyes in a face image, characterized in that it comprises the following steps:
Step 1: based on a random forest machine learning algorithm, train a number of decision trees and combine them into a random forest using ensemble techniques;
Step 2: input the face image to be tested and convert it into a gray-scale map, i.e. a two-dimensional matrix;
Step 3: apply the random forest obtained in step 1 to the two-dimensional gray-value matrix obtained in step 2 using a multistage localization structure: starting from a fixed coarse eye region, take the current localization result as the center of the next search region, narrow the search range stage by stage, and iterate until the last stage returns its localization result, which determines the eye position coordinates;
Step 4: take the weighted mean of the multiple eye coordinates produced in step 3 by the many decision trees in the random forest to obtain the final eye localization result.
2. The method for rapidly positioning human eyes in a face image according to claim 1, characterized in that the training process based on the random forest machine learning algorithm in step 1 comprises the following concrete steps:
Step 1.1) normalize the input face sample gray-scale images to obtain standard training samples, the coordinates of the upper-left, upper-right and lower-right corners of each face sample gray-scale image being (0,0), (1,0) and (1,1) respectively;
Step 1.2) apply variation processing to the standard training samples: apply random offsets within a certain range along the horizontal and vertical axes and in picture scale, together with random rotation within a certain angle, generating multi-pose training samples;
Step 1.3) from the two-dimensional matrices of the multi-pose training samples, take the most direct feature values, the gray values of pixels at given coordinates in the image, as input, and the eye coordinates as output, to obtain a binary tree whose root node and intermediate nodes each store a trained gray-level difference $I(I_1)-I(I_2)$ between two coordinates and a threshold $T$; the binary test on image $I$ is defined as:
$$\mathrm{bintest}\bigl(I;\,I(I_1),I(I_2),T\bigr)=\begin{cases}0, & I(I_1)-I(I_2)<T\\ 1, & \text{otherwise}\end{cases}$$
Step 1.4) according to the binary tree, divide the n input multi-pose training samples into two classes placed in the left and right child nodes of the root node; at the next layer each class is again split by a binary test, giving four classes of pictures in total, and so on, ending when the specified tree depth or the specified error size is reached.
3. The method for rapidly positioning human eyes in a face image according to claim 2, characterized in that the training process based on the random forest machine learning algorithm in step 1 further comprises:
Step 1.5) perform error validation on the training result obtained in step 1.4) to determine whether the error is minimal, according to the following formula:
$$Q_{node}=\sum_{p\in S_l}\Bigl\|p-\frac{1}{|S_l|}\sum_{q\in S_l}q\Bigr\|^2+\sum_{p\in S_r}\Bigl\|p-\frac{1}{|S_r|}\sum_{q\in S_r}q\Bigr\|^2$$
where $Q_{node}$ is the minimum error, i.e. the sum of squared Euclidean distances of all samples to the estimated center of their class, and $S_l$ and $S_r$ are the sets of coordinates in the left and right child nodes of the two classes produced by classifying with a given feature and threshold.
4. The method for rapidly positioning human eyes in a face image according to claim 2, characterized in that the concrete steps of obtaining the binary tree in step 1.3) are as follows:
Step 1.31) initialize the fitting-error function according to the formula:
$$F_0(x)=\arg\min_{F}\sum_{i=1}^{N}L\bigl(y_i,F(x_i)\bigr)$$
where $F(x)$ is the decision-tree function and $L$ is the fitting-error (loss) function;
Step 1.32) compute the pseudo-residuals in the negative gradient direction according to the formula:
$$\tilde{y}_i=y_i-F_{m-1}(x_i)$$
Step 1.33) update the fit of the pseudo-residuals:
$$\alpha_m=\arg\min_{\alpha,\rho}\sum_{i=1}^{N}\bigl\|\tilde{y}_i-\rho\,h(x_i;\alpha)\bigr\|^2$$
where $h(x_i;\alpha)$ is the fitting result; the first fit is $h(x_i;\alpha_1)$;
Step 1.34) update the sample weights and the regression step multiplier:
$$\omega_i=\bigl\|y_i-F_M(x_i)\bigr\|^2\Big/\Bigl(\sum_{i=1}^{N}\bigl\|y_i-F_M(x_i)\bigr\|^2\Bigr)$$
$$\rho_m=\arg\min_{\rho}\sum_{i=1}^{N}L\bigl(y_i,\,F_{m-1}(x_i)+\rho\,h(x_i;\alpha_m)\bigr)$$
Step 1.35) update the model for $m=1\to M$; after $M$ rounds the iteration ends:
$$F_m(x)=F_{m-1}(x)+\gamma\,\rho_m\,h(x;\alpha_m)$$
where $0<\gamma\le 1$ is the learning rate; the size of $\gamma$ determines the convergence speed of the iteration.
CN201510991486.4A 2015-12-24 2015-12-24 Method for rapidly positioning human eyes in a face image Expired - Fee Related CN105426882B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510991486.4A CN105426882B (en) 2015-12-24 2015-12-24 Method for rapidly positioning human eyes in a face image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510991486.4A CN105426882B (en) 2015-12-24 2015-12-24 Method for rapidly positioning human eyes in a face image

Publications (2)

Publication Number Publication Date
CN105426882A true CN105426882A (en) 2016-03-23
CN105426882B CN105426882B (en) 2018-11-20

Family

ID=55505081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510991486.4A Expired - Fee Related CN105426882B (en) 2015-12-24 2015-12-24 Method for rapidly positioning human eyes in a face image

Country Status (1)

Country Link
CN (1) CN105426882B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020150280A1 (en) * 2000-12-04 2002-10-17 Pingshan Li Face detection under varying rotation
US20150139497A1 (en) * 2012-09-28 2015-05-21 Accenture Global Services Limited Liveness detection
CN103093215A (en) * 2013-02-01 2013-05-08 北京天诚盛业科技有限公司 Eye location method and device
CN104766059A (en) * 2015-04-01 2015-07-08 上海交通大学 Rapid and accurate human eye positioning method and sight estimation method based on human eye positioning

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862285A (en) * 2017-11-07 2018-03-30 哈尔滨工业大学深圳研究生院 A kind of face alignment method
CN111670438A (en) * 2017-12-01 2020-09-15 1Qb信息技术公司 System and method for random optimization of robust inference problem
CN111670438B (en) * 2017-12-01 2023-12-29 1Qb信息技术公司 System and method for randomly optimizing robust reasoning problem
CN108732559A (en) * 2018-03-30 2018-11-02 北京邮电大学 A kind of localization method, device, electronic equipment and readable storage medium storing program for executing
CN108732559B (en) * 2018-03-30 2021-09-24 北京邮电大学 Positioning method, positioning device, electronic equipment and readable storage medium
CN109522871A (en) * 2018-12-04 2019-03-26 北京大生在线科技有限公司 A kind of facial contour localization method and system based on random forest
CN109522871B (en) * 2018-12-04 2022-07-12 北京大生在线科技有限公司 Face contour positioning method and system based on random forest
WO2021159585A1 (en) * 2020-02-10 2021-08-19 北京工业大学 Dioxin emission concentration prediction method
CN114021705A (en) * 2022-01-04 2022-02-08 浙江大华技术股份有限公司 Model accuracy determination method, related device and equipment
CN114529857A (en) * 2022-02-25 2022-05-24 平安科技(深圳)有限公司 User online state identification method, device, server and storage medium

Also Published As

Publication number Publication date
CN105426882B (en) 2018-11-20

Similar Documents

Publication Publication Date Title
CN105426882A (en) Method for rapidly positioning human eyes in human face image
Ferrer et al. Static signature synthesis: A neuromotor inspired approach for biometrics
CN106682598B (en) Multi-pose face feature point detection method based on cascade regression
CN106295522B (en) A kind of two-stage anti-fraud detection method based on multi-orientation Face and environmental information
Yoon et al. Hand gesture recognition using combined features of location, angle and velocity
CN107895160A (en) Human face detection and tracing device and method
CN107748858A Multi-pose eye locating method based on cascaded convolutional neural networks
CN108229330A (en) Face fusion recognition methods and device, electronic equipment and storage medium
CN108227912A (en) Apparatus control method and device, electronic equipment, computer storage media
CN104463191A (en) Robot visual processing method based on attention mechanism
CN106447625A (en) Facial image series-based attribute identification method and device
CN106469298A (en) Age recognition methodss based on facial image and device
CN106295591A (en) Gender identification method based on facial image and device
CN104517097A (en) Kinect-based moving human body posture recognition method
CN106326857A (en) Gender identification method and gender identification device based on face image
Tan et al. Dynamic hand gesture recognition using motion trajectories and key frames
Jiang et al. Online robust action recognition based on a hierarchical model
CN106203375A Pupil positioning method based on face and human-eye detection in a face image
CN101853397A (en) Bionic human face detection method based on human visual characteristics
CN105809713A (en) Object tracing method based on online Fisher discrimination mechanism to enhance characteristic selection
Balasuriya et al. Learning platform for visually impaired children through artificial intelligence and computer vision
CN109543629A (en) A kind of blink recognition methods, device, equipment and readable storage medium storing program for executing
Hachaj et al. Real-time recognition of selected karate techniques using GDL approach
Pang et al. Dance video motion recognition based on computer vision and image processing
Li et al. Recognizing hand gestures using the weighted elastic graph matching (WEGM) method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181120