CN111160094A - Method and device for identifying players in running snapshot photos - Google Patents
Method and device for identifying players in running snapshot photos
- Publication number
- CN111160094A (application CN201911170965.4A)
- Authority
- CN
- China
- Prior art keywords
- face
- model
- detection model
- snapshot
- face detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 39
- 238000001514 detection method Methods 0.000 claims abstract description 59
- 239000013598 vector Substances 0.000 claims abstract description 51
- 238000012549 training Methods 0.000 claims description 47
- 238000004590 computer program Methods 0.000 claims description 6
- 238000004891 communication Methods 0.000 description 7
- 210000000887 face Anatomy 0.000 description 5
- 238000010586 diagram Methods 0.000 description 4
- 230000000694 effects Effects 0.000 description 2
- 238000010606 normalization Methods 0.000 description 2
- 238000007670 refining Methods 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000004806 packaging method and process Methods 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 230000000135 prohibitive effect Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Library & Information Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Abstract
An embodiment of the invention provides a method and a device for identifying players in running snapshot photos. The method includes: acquiring retrieval photos provided by players, inputting the retrieval photos into a trained face detection model, and extracting face information; inputting the face information into a trained face recognition model, and extracting a feature vector corresponding to the face information; and comparing the feature vectors with face feature vectors pre-stored in a preset face database, and determining the target photos whose vector Euclidean distance is smaller than a preset threshold value. With the method and device provided by the embodiments of the invention, face detection and recognition are performed in real time on photos of players taken during the race; a player only needs to log in and upload a photo containing his or her face to quickly retrieve matching pictures from the face library, which reduces labor cost.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a method and a device for identifying players in running snapshot photos.
Background
With the progress of society, running has gradually grown from a sport for the young into a sport for everyone, and runners often want to relive their captured moments after a race.
Conventionally, after a race, the collected photographs are sorted by the number plate each player wears on the chest, and a photo set is then compiled for each player. However, because numbers may be blocked by clothing or accessories, number plates may fall off, or the photographer's or camera's angle may fail to capture the full number, this prior-art method performs poorly: it cannot provide a player's race photos accurately and completely, and subsequent filing and searching still require a large amount of manual work.
Therefore, a new method for identifying players in running snapshot photos is needed to solve the above problems.
Disclosure of Invention
To solve the above problems, embodiments of the present invention provide a method and an apparatus for identifying players in running snapshot photos, which overcome or at least partially solve the above problems.
In a first aspect, an embodiment of the present invention provides a method for identifying players in running snapshot photos, including:
acquiring retrieval photos provided by players, inputting the retrieval photos into a trained face detection model, and extracting face information;
inputting the face information into a trained face recognition model, and extracting a feature vector corresponding to the face information;
and comparing the feature vectors with face feature vectors pre-stored in a preset face database, and determining the target photos whose vector Euclidean distance is smaller than a preset threshold value.
Wherein, before the acquiring of the retrieval photos provided by the players, the method further comprises:
and establishing the face detection model, and training the face detection model to obtain the trained face detection model.
Wherein, before the acquiring of the retrieval photos provided by the players, the method further comprises:
and establishing the face recognition model, and training the face recognition model to obtain the trained face recognition model.
The establishing of the face detection model and the training of the face detection model to obtain the trained face detection model comprises the following steps:
acquiring a training sample, wherein the training sample comprises pictures containing faces together with pre-annotated position information of the face key points;
and inputting the training sample into the established face detection model, which outputs accurate face frames and key point positions.
In a second aspect, an embodiment of the present invention further provides a device for identifying players in running snapshot photos, including:
the face detection module is used for acquiring retrieval photos provided by players, inputting the retrieval photos into a trained face detection model, and extracting face information;
the face recognition module is used for inputting the face information into a trained face recognition model and extracting a feature vector corresponding to the face information;
and the retrieval module is used for comparing the feature vectors with face feature vectors pre-stored in a preset face database and determining the target photos whose vector Euclidean distance is smaller than a preset threshold value.
Wherein, the device for identifying players in running snapshot photos further comprises:
and the first model training module is used for establishing the face detection model and training the face detection model to obtain the trained face detection model.
Wherein, the device for identifying players in running snapshot photos further comprises:
and the second model training module is used for establishing the face recognition model and training the face recognition model to obtain the trained face recognition model.
Wherein the first model training module is specifically configured to:
acquiring a training sample, wherein the training sample comprises pictures containing faces together with pre-annotated position information of the face key points;
and inputting the training sample into the established face detection model, which outputs accurate face frames and key point positions.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
a processor, a memory, a communication interface, and a bus; the processor, the memory and the communication interface communicate with each other through the bus; the memory stores program instructions executable by the processor, and the processor calls the program instructions to execute the above method for identifying players in running snapshot photos.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium storing computer instructions that cause a computer to execute the above method for identifying players in running snapshot photos.
With the method and device for identifying players in running snapshot photos provided by the embodiments of the invention, face detection and recognition are performed in real time on photos of players taken during the race; a player only needs to log in and upload a photo containing his or her face to quickly retrieve matching pictures from the face library, which reduces labor cost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a method for identifying a selected hand in a snapshot taken during running according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a device for identifying players in running snapshot photos according to an embodiment of the present invention;
fig. 3 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments, but not all embodiments, of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of a method for identifying players in running snapshot photos according to an embodiment of the present invention. As shown in fig. 1, the method includes:
101. acquiring retrieval photos provided by players, inputting the retrieval photos into a trained face detection model, and extracting face information;
102. inputting the face information into a trained face recognition model, and extracting a feature vector corresponding to the face information;
103. and comparing the feature vectors with face feature vectors pre-stored in a preset face database, and determining the target photos whose vector Euclidean distance is smaller than a preset threshold value.
It should be noted that the execution subject of the embodiment of the present invention is a computer software program pre-stored in a computer device, and the application scenario is a running event, for example a marathon, where manual identification of player photos is inaccurate and its labor cost prohibitive.
Specifically, in step 101, each player who needs face retrieval logs in with a photo containing his or her face; this is the player's retrieval photo. The embodiment of the present invention then inputs the retrieval photo into a trained face detection model, which locates the face in the picture and the key points within the face region. In the embodiment of the invention, all of this output is referred to as face information.
It should be noted that, in the embodiment of the present invention, the extracted face and face key points are corrected against the published standard key point positions of the two eyes, the nose tip, and the left and right mouth corners; preferably, the key points are all mapped onto these standard positions through an affine transformation. This weakens the influence of face tilt, pitch, and profile views on the subsequent face vector extraction.
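The alignment step above can be sketched as follows. The five template coordinates are an assumption (the widely circulated template for a 112x112 aligned crop), since the patent only refers to "disclosed standard positions", and the least-squares affine solver stands in for whatever alignment routine an implementation would actually use:

```python
import numpy as np

# Assumed 5-point template for a 112x112 aligned face crop
# (left eye, right eye, nose tip, left mouth corner, right mouth corner).
TEMPLATE = np.array([
    [38.2946, 51.6963],
    [73.5318, 51.5014],
    [56.0252, 71.7366],
    [41.5493, 92.3655],
    [70.7299, 92.2041],
], dtype=np.float64)

def estimate_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points onto dst."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    b = dst.reshape(-1)            # interleaved [x0, y0, x1, y1, ...]
    A[0::2, 0:2] = src             # rows for the x-coordinate equations
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src             # rows for the y-coordinate equations
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)

def align_points(keypoints, template=TEMPLATE):
    """Map detected keypoints onto the template positions."""
    M = estimate_affine(keypoints, template)
    homog = np.hstack([keypoints, np.ones((keypoints.shape[0], 1))])
    return homog @ M.T

# Simulate a slightly rotated and shifted detection, then align it back.
detected = TEMPLATE @ np.array([[0.98, -0.05], [0.05, 0.98]]).T + [10.0, -4.0]
aligned = align_points(detected)
print(np.abs(aligned - TEMPLATE).max())  # near zero
```

In practice the same estimated transform would be applied to the whole face crop (e.g. with an image-warping routine), not just to the keypoints.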
Further, in step 102, the embodiment of the present invention inputs the face information into a trained face recognition model, which converts the aligned face information into a face feature vector.
Finally, in step 103, the embodiment of the present invention compares the face feature vector of the retrieval photo against a preset face database and determines the target photos whose vector Euclidean distance is smaller than a preset threshold value. It can be understood that the feature vectors in the database are recorded as the race progresses and are likewise obtained from the trained face detection model and the trained face recognition model. One or more photos similar to the retrieval photo may be found, and all such target photos are attributed to the player performing the retrieval.
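A minimal sketch of this threshold-based Euclidean retrieval, assuming unit-length 512-dimensional face vectors; the threshold value used here is purely illustrative and would have to be tuned on real data:

```python
import numpy as np

def retrieve(query_vec, gallery_vecs, threshold=1.2):
    """Return indices of gallery photos whose face vector lies within
    `threshold` Euclidean distance of the query vector."""
    dists = np.linalg.norm(gallery_vecs - query_vec, axis=1)
    return np.flatnonzero(dists < threshold), dists

rng = np.random.default_rng(0)
query = rng.normal(size=512)
query /= np.linalg.norm(query)                       # unit-length face vector
gallery = rng.normal(size=(4, 512))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
gallery[2] = query + 0.01 * rng.normal(size=512)     # a near-duplicate face
gallery[2] /= np.linalg.norm(gallery[2])

matches, dists = retrieve(query, gallery)
print(matches)  # only the near-duplicate at index 2 falls under the threshold
```

Unrelated unit vectors in 512 dimensions sit at a distance of roughly sqrt(2) from each other, which is why a threshold comfortably below that separates the same person from strangers in this toy setup.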
Preferably, the face database provided in the embodiment of the present invention comprises two tables. The first table records the photos taken by the cameras or shooting devices, with the fields: photo ID and photo address. The second table records the faces and face vectors contained in those photos, with the fields: face ID, face photo address, source photo ID, and face vector. After retrieval against the retrieval photo, the target photos corresponding to the search results are packaged and sent to the player.
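The two-table layout can be sketched as a schema. The table and column names below are hypothetical, inferred only from the fields the paragraph lists, and the face vector is stored as a raw float32 blob for illustration:

```python
import sqlite3
import numpy as np

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE photo (
    photo_id  INTEGER PRIMARY KEY,
    address   TEXT NOT NULL            -- storage path/URL of the snapshot
);
CREATE TABLE face (
    face_id   INTEGER PRIMARY KEY,
    address   TEXT NOT NULL,           -- address of the cropped face image
    photo_id  INTEGER NOT NULL REFERENCES photo(photo_id),
    vector    BLOB NOT NULL            -- 512-dim float32 face vector
);
""")

vec = np.ones(512, dtype=np.float32)
conn.execute("INSERT INTO photo VALUES (1, '/photos/km05/0001.jpg')")
conn.execute("INSERT INTO face VALUES (1, '/faces/0001_0.jpg', 1, ?)",
             (vec.tobytes(),))

# Joining the two tables recovers the source photo for any matched face.
row = conn.execute(
    "SELECT p.address, f.vector FROM face f JOIN photo p USING(photo_id)"
).fetchone()
restored = np.frombuffer(row[1], dtype=np.float32)
print(row[0], restored.shape)
```

A production system would more likely keep the vectors in a dedicated similarity index and use the relational tables only for the ID-to-address bookkeeping.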
With the method and device for identifying players in running snapshot photos provided by the embodiments of the invention, face detection and recognition are performed in real time on photos of players taken during the race; a player only needs to log in and upload a photo containing his or her face to quickly retrieve matching pictures from the face library, which reduces labor cost.
Meanwhile, using the face rather than the number plate as the basis for identifying players overcomes special cases such as a blocked number plate, giving the method strong robustness and high accuracy in practical use.
On the basis of the above embodiment, before the acquiring of the retrieval photos provided by the players, the method further includes:
and establishing the face detection model, and training the face detection model to obtain the trained face detection model.
The establishing of the face detection model and the training of the face detection model to obtain the trained face detection model comprises the following steps:
acquiring a training sample, wherein the training sample comprises pictures containing faces together with pre-annotated position information of the face key points;
and inputting the training sample into the established face detection model, which outputs accurate face frames and key point positions.
As can be seen from the above embodiment, the embodiment of the present invention needs to train a face detection model to locate the face in a picture and the key points on the face.
Specifically, the training samples used for the face detection model are pictures containing faces, annotated with the positions of the faces in the pictures and the positions of 5 face key points: the two eyes, the nose tip, and the left and right mouth corners. The goal is for the model to learn the mapping from the image to the face locations and key point locations.
The face detection model is preferably divided into four sub-networks:
a proposal network (P-net), which uses a small number of convolution kernels and a shallow structure to filter out most useless face frames at little computational cost; a refining network (R-net), which further analyzes the face frame positions produced by P-net and performs bounding-box regression, yielding more accurate face frames; an O-net, which locates the face key points within the accurate face frames obtained from the refining network and outputs whether a face is present, the face frame position, and the face key point positions; and a localization network (L-net), which uses 5 branches to predict the positions of the 5 key points respectively, extracting more accurate face key points.
Through the cascade of these four networks, the face detection model obtains more accurate face frames and key point positions.
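The control flow of the four-stage cascade can be sketched as follows. The stand-in scoring functions are placeholders for the real convolutional sub-networks, so only the progressive-filtering structure is illustrated, not any actual detection quality:

```python
import numpy as np

def cascade(candidates, stages, thresholds):
    """Pass candidate boxes through successive stages, keeping only
    those whose score exceeds that stage's threshold."""
    for score_fn, thr in zip(stages, thresholds):
        scores = np.array([score_fn(c) for c in candidates])
        candidates = [c for c, s in zip(candidates, scores) if s > thr]
        if not candidates:
            break
    return candidates

# Stand-in stages: each just reads a fake "face likelihood" stored with
# the box; a real P/R/O/L-net would recompute the score from pixels.
stages = [lambda c: c["p"]] * 4
thresholds = [0.6, 0.7, 0.8, 0.9]   # progressively stricter stages

boxes = [{"box": (0, 0, 12, 12), "p": 0.95},
         {"box": (40, 40, 52, 52), "p": 0.75},
         {"box": (80, 10, 92, 22), "p": 0.30}]
kept = cascade(boxes, stages, thresholds)
print(len(kept))  # only the 0.95 box survives all four thresholds
```

The point of the cascade is economic: the cheap early stages discard most candidates so that the expensive later stages only ever see a handful of plausible face frames.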
On the basis of the above embodiment, before the acquiring of the retrieval photos provided by the players, the method further includes:
and establishing the face recognition model, and training the face recognition model to obtain the trained face recognition model.
As can be seen from the above embodiment, the embodiment of the present invention also needs to establish a face recognition model to convert the aligned faces into face vectors.
Specifically, when training the face recognition model, the goal is a feature vector that is expressive and suitable for metric classification: the model must learn to distinguish different people while keeping the features of the same person consistent, i.e., it must learn feature vectors with large inter-class differences and small intra-class differences.
The main part of the model uses a 100-layer residual network, since a deep network can extract more robust features. In the training stage, the features extracted by the network at the penultimate layer and the classification weights are L2-normalized, and arccos is computed to obtain the angle θ between the weight W and the feature X. An angular margin is added to this angle to supervise the model into learning more concentrated intra-class features. The result is multiplied by a scaling value s to compensate for the scale lost through L2 normalization, and softmax finally yields the classification loss of the network. The training data only needs face images of n classes of people with their corresponding classification labels. In the testing stage, a face is passed through the trained model, and the 512-dimensional output of the final layer is the face vector converted from the face image.
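The loss described in this paragraph matches the additive angular margin (ArcFace-style) formulation. A numpy sketch of the loss computation follows; the margin m and scale s values are commonly published defaults that the text itself does not specify:

```python
import numpy as np

def cos_and_theta(X, W):
    """Angles between L2-normalized features (batch, d) and class
    weights (d, n_classes), as described above."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)   # normalize features
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)   # normalize weights
    cos = np.clip(Xn @ Wn, -1.0, 1.0)
    return cos, np.arccos(cos)

def angular_margin_loss(X, W, labels, m=0.5, s=64.0):
    """Softmax cross-entropy after adding margin m to the true-class
    angle and rescaling by s (assumed default values)."""
    cos, theta = cos_and_theta(X, W)
    idx = np.arange(len(labels))
    logits = s * cos
    logits[idx, labels] = s * np.cos(theta[idx, labels] + m)  # penalize target
    logits -= logits.max(axis=1, keepdims=True)               # stable softmax
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[idx, labels].mean()

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 512))          # a batch of penultimate-layer features
W = rng.normal(size=(512, 10))         # classification weights, 10 identities
labels = np.array([0, 1, 2, 3])
loss_with_margin = angular_margin_loss(X, W, labels, m=0.5)
loss_no_margin = angular_margin_loss(X, W, labels, m=0.0)
print(loss_with_margin > loss_no_margin)  # the margin makes the task harder
```

Shrinking the penalized angle back below the decision boundary is exactly what forces features of the same identity to cluster tightly, which is the "small intra-class difference" property the paragraph asks for.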
Fig. 2 is a schematic structural diagram of a device for identifying players in running snapshot photos according to an embodiment of the present invention. As shown in fig. 2, the device includes: a face detection module 201, a face recognition module 202 and a retrieval module 203, wherein:
the face detection module 201 is used for acquiring retrieval photos provided by players, inputting the retrieval photos into a trained face detection model, and extracting face information;
the face recognition module 202 is used for inputting the face information into a trained face recognition model and extracting a feature vector corresponding to the face information;
the retrieval module 203 is used for comparing the feature vectors with face feature vectors pre-stored in a preset face database and determining the target photos whose vector Euclidean distance is smaller than a preset threshold value.
Specifically, the face detection module 201, the face recognition module 202 and the retrieval module 203 execute the technical scheme of the method embodiment shown in fig. 1 for identifying players in running snapshot photos; the implementation principle and technical effect are similar and are not repeated here.
With the device for identifying players in running snapshot photos provided by the embodiment of the invention, face detection and recognition are performed in real time on photos of players taken during the race; a player only needs to log in and upload a photo containing his or her face to quickly retrieve matching pictures from the face library, which reduces labor cost.
On the basis of the above embodiment, the device for identifying players in running snapshot photos further includes:
and the first model training module is used for establishing the face detection model and training the face detection model to obtain the trained face detection model.
On the basis of the above embodiment, the device for identifying players in running snapshot photos further includes:
and the second model training module is used for establishing the face recognition model and training the face recognition model to obtain the trained face recognition model.
On the basis of the foregoing embodiment, the first model training module is specifically configured to:
acquiring a training sample, wherein the training sample comprises pictures containing faces together with pre-annotated position information of the face key points;
and inputting the training sample into the established face detection model, which outputs accurate face frames and key point positions.
Fig. 3 is a block diagram of an electronic device according to an embodiment of the present invention. Referring to fig. 3, the electronic device includes: a processor 301, a communication interface 302, a memory 303 and a bus 304, wherein the processor 301, the communication interface 302 and the memory 303 communicate with each other through the bus 304. The processor 301 may call logic instructions in the memory 303 to perform the following method: acquiring retrieval photos provided by players, inputting the retrieval photos into a trained face detection model, and extracting face information; inputting the face information into a trained face recognition model, and extracting a feature vector corresponding to the face information; and comparing the feature vectors with face feature vectors pre-stored in a preset face database, and determining the target photos whose vector Euclidean distance is smaller than a preset threshold value.
An embodiment of the present invention discloses a computer program product, which includes a computer program stored on a non-transitory computer-readable storage medium; the computer program includes program instructions which, when executed by a computer, enable the computer to execute the methods provided by the above method embodiments, for example: acquiring retrieval photos provided by players, inputting the retrieval photos into a trained face detection model, and extracting face information; inputting the face information into a trained face recognition model, and extracting a feature vector corresponding to the face information; and comparing the feature vectors with face feature vectors pre-stored in a preset face database, and determining the target photos whose vector Euclidean distance is smaller than a preset threshold value.
Embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer instructions that cause the computer to perform the methods provided by the above method embodiments, for example: acquiring retrieval photos provided by players, inputting the retrieval photos into a trained face detection model, and extracting face information; inputting the face information into a trained face recognition model, and extracting a feature vector corresponding to the face information; and comparing the feature vectors with face feature vectors pre-stored in a preset face database, and determining the target photos whose vector Euclidean distance is smaller than a preset threshold value.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to each embodiment or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A method for identifying players in running snapshot photos, characterized by comprising:
acquiring retrieval photos provided by players, inputting the retrieval photos into a trained face detection model, and extracting face information;
inputting the face information into a trained face recognition model, and extracting a feature vector corresponding to the face information;
and comparing the feature vectors with face feature vectors pre-stored in a preset face database, and determining the target photos whose vector Euclidean distance is smaller than a preset threshold value.
2. The method for identifying players in running snapshot photos according to claim 1, wherein before the acquiring of the retrieval photos provided by the players, the method further comprises:
and establishing the face detection model, and training the face detection model to obtain the trained face detection model.
3. The method for identifying players in running snapshot photos according to claim 2, wherein before the acquiring of the retrieval photos provided by the players, the method further comprises:
and establishing the face recognition model, and training the face recognition model to obtain the trained face recognition model.
4. The method for identifying players in running snapshot photos according to claim 2, wherein the establishing of the face detection model and the training of the face detection model to obtain the trained face detection model comprises:
acquiring a training sample, wherein the training sample comprises pictures containing faces together with pre-annotated position information of the face key points;
and inputting the training sample into the established face detection model, which outputs accurate face frames and key point positions.
5. A device for identifying players in running snapshot photos, characterized by comprising:
the face detection module is used for acquiring retrieval photos provided by players, inputting the retrieval photos into a trained face detection model and extracting face information;
the face recognition module is used for inputting the face information into a trained face recognition model and extracting a feature vector corresponding to the face information;
and the retrieval module is used for comparing the feature vectors with face feature vectors pre-stored in a preset face database and determining the target photos whose vector Euclidean distance is smaller than a preset threshold value.
6. The device for identifying players in running snapshot photos according to claim 5, further comprising:
and the first model training module is used for establishing the face detection model and training the face detection model to obtain the trained face detection model.
7. The apparatus for recognizing a hand-selected from a snapshot while running according to claim 6, further comprising:
and the second model training module is used for establishing the face recognition model and training the face recognition model to obtain the trained face recognition model.
8. The device for recognizing a hand-selected from a snapshot of running according to claim 6, wherein the first model training module is specifically configured to:
acquiring a training sample, wherein the training sample comprises a picture with a human face and position information of key points marked in advance by the human face;
and inputting the training sample into the established face detection model, and outputting an accurate face frame and a key point position.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, performs the steps of the method for identifying a runner in running snapshot photos according to any one of claims 1 to 4.
10. A non-transitory computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method for identifying a runner in running snapshot photos according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911170965.4A CN111160094A (en) | 2019-11-26 | 2019-11-26 | Method and device for identifying hand selection in running snapshot photo |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111160094A true CN111160094A (en) | 2020-05-15 |
Family
ID=70556068
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911170965.4A Pending CN111160094A (en) | 2019-11-26 | 2019-11-26 | Method and device for identifying hand selection in running snapshot photo |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111160094A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107609108A (en) * | 2017-09-13 | 2018-01-19 | 杭州景联文科技有限公司 | A kind of sportsman's photo method for sorting based on number slip identification and recognition of face |
CN109376717A (en) * | 2018-12-14 | 2019-02-22 | 中科软科技股份有限公司 | Personal identification method, device, electronic equipment and the storage medium of face comparison |
WO2019128646A1 (en) * | 2017-12-28 | 2019-07-04 | 深圳励飞科技有限公司 | Face detection method, method and device for training parameters of convolutional neural network, and medium |
CN110147458A (en) * | 2019-05-24 | 2019-08-20 | 涂哲 | A kind of photo screening technique, system and electric terminal |
CN110458097A (en) * | 2019-08-09 | 2019-11-15 | 软通动力信息技术有限公司 | A kind of face picture recognition methods, device, electronic equipment and storage medium |
- 2019-11-26: Chinese application CN201911170965.4A filed; published as CN111160094A, status Pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112633325A (en) * | 2020-11-28 | 2021-04-09 | 武汉虹信技术服务有限责任公司 | Personnel identification method and device based on tactical model, electronic equipment and readable medium |
CN112633325B (en) * | 2020-11-28 | 2022-08-05 | 武汉虹信技术服务有限责任公司 | Personnel identification method and device based on tactical model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109816092B (en) | Deep neural network training method and device, electronic equipment and storage medium | |
Yan et al. | Learning the change for automatic image cropping | |
WO2019184464A1 (en) | Detection of near-duplicate image | |
CN110428399B (en) | Method, apparatus, device and storage medium for detecting image | |
CN104933443A (en) | Automatic identifying and classifying method for sensitive data | |
CN107423306B (en) | Image retrieval method and device | |
AU2021203869B2 (en) | Methods, devices, electronic apparatuses and storage media of image processing | |
WO2021004186A1 (en) | Face collection method, apparatus, system, device, and medium | |
CN109815823B (en) | Data processing method and related product | |
CN111178252A (en) | Multi-feature fusion identity recognition method | |
CN112633221B (en) | Face direction detection method and related device | |
CN110460838B (en) | Lens switching detection method and device and computer equipment | |
CN112101315A (en) | Deep learning-based exercise judgment guidance method and system | |
CN112766065A (en) | Mobile terminal examinee identity authentication method, device, terminal and storage medium | |
CN111160094A (en) | Method and device for identifying hand selection in running snapshot photo | |
CN113705310A (en) | Feature learning method, target object identification method and corresponding device | |
Zhou et al. | Pose comparison based on part affinity fields | |
KR102683444B1 (en) | Apparatus for recognizing activity in sports video using cross granularity accumulation module and method thereof | |
CN112560728B (en) | Target object identification method and device | |
Fakhfour et al. | Video alignment using unsupervised learning of local and global features | |
CN113361568A (en) | Target identification method, device and electronic system | |
WO2021056531A1 (en) | Face gender recognition method, face gender classifier training method and device | |
CN113129252A (en) | Image scoring method and electronic equipment | |
CN113496243A (en) | Background music obtaining method and related product | |
CN116433939B (en) | Sample image generation method, training method, recognition method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200515 |