WO2020087922A1 - Facial attribute identification method, device, computer device and storage medium - Google Patents
- Publication number: WO2020087922A1 (PCT application PCT/CN2019/089662)
- Authority: WIPO (PCT)
- Prior art keywords: face, type, node, task, image
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Definitions
- the present application relates to face attribute recognition, and in particular to a face attribute recognition method, a face attribute recognition device, a computer device, and a storage medium.
- Face recognition collects user images through a camera. When a face is detected, attributes of the user image (such as gender, age, posture, or expression) are identified, and the extracted attributes are then compared with preset attribute information stored in a database, thereby realizing face recognition of the user.
- a preferred embodiment of the present application provides a face attribute recognition method, including: constructing a parallel computing model, where the parallel computing model includes a master node and a plurality of task nodes connected to the master node, and a corresponding storage space is allocated to each task node, all the task nodes being divided into one first-type task node and at least two second-type task nodes, each second-type task node corresponding to one face attribute; acquiring a target image through the master node, allocating the target image to the storage space corresponding to the first-type task node, and triggering the first-type task node to perform calculation on the target image to identify whether the target image includes a face image; when the acquired target image includes a face image, extracting the face image from the target image through the first-type task node, and returning the extracted face image to the master node; allocating the face image, through the master node, to the storage space corresponding to each second-type task node, and triggering all the second-type task nodes to perform parallel calculation on all face attributes of the stored face image, so as to obtain the calculation results of all face attributes; and returning the calculation result of the face attribute of each second-type task node to the master node, and triggering the master node to summarize the calculation results of all face attributes.
- a preferred embodiment of the present application further provides a face attribute recognition device, including: a construction module for constructing a parallel computing model, where the parallel computing model includes a master node and a plurality of task nodes connected to the master node, and for allocating a corresponding storage space to each task node, all the task nodes being divided into one first-type task node and at least two second-type task nodes, each second-type task node corresponding to one face attribute;
- an allocation module for acquiring a target image through the master node, allocating the target image to the storage space corresponding to the first-type task node, and triggering the first-type task node to perform calculation on the target image to identify whether the target image includes a face image;
- an extraction module for extracting the face image from the target image through the first-type task node when the acquired target image includes a face image; and a return module for returning the face image extracted by the extraction module to the master node;
- the allocation module is further used to allocate the face image, through the master node, to the storage space corresponding to each second-type task node, and to trigger all the second-type task nodes to perform parallel calculation on all face attributes of the stored face image, so as to obtain the calculation results of all face attributes;
- the return module is further used to return the calculation result of the face attribute of each second-type task node to the master node, and to trigger the master node to summarize the calculation results of all face attributes.
- a preferred embodiment of the present application further provides a computer device, including a processor and a memory, where at least one computer-readable instruction is stored in the memory, and the processor is used to execute the computer-readable instruction to implement the face attribute recognition method described above.
- a preferred embodiment of the present application further provides a non-volatile readable storage medium, which stores at least one computer-readable instruction; when the computer-readable instruction is executed by a processor, the face attribute recognition method described above is implemented.
- This application calculates different face attributes of the face image in parallel through the parallel computing model, which helps improve computing efficiency. The face attribute recognition method can subsequently be deployed in various public places, such as stations, subways, shopping malls and supermarkets: target images collected by the cameras at these places are acquired, the various face attributes are recognized, and the recognition results are compared with the face attribute recognition results of specific groups of people (such as criminals), so as to efficiently screen the target population according to the degree of matching.
- FIG. 1 is a flowchart of a face attribute recognition method provided in Embodiment 1 of the present application.
- FIG. 2 is a schematic diagram of geometric normalization of a face image in the face attribute recognition method shown in FIG. 1.
- FIG. 3 is a schematic structural diagram of a face attribute recognition apparatus provided in Embodiment 2 of the present application.
- FIG. 4 is a schematic diagram of a computer device provided in Embodiment 3 of the present application.
- FIG. 1 is a flowchart of a face attribute recognition method provided by a preferred embodiment of the present application.
- the face attribute recognition method is applied to a computer device 1. According to different requirements, the order of steps of the face attribute recognition method may be changed, and some steps may be omitted or combined.
- the face attribute recognition method includes the following steps:
- Step S11: Construct a parallel computing model.
- the parallel computing model includes a master node and a plurality of task nodes connected to the master node, and a corresponding storage space is allocated to each task node. All the task nodes are divided into one first-type task node and at least two second-type task nodes, and each second-type task node corresponds to one face attribute.
- the parallel computing model may adopt one of the following models: the Parallel Random Access Machine (PRAM) model, the Bulk Synchronous Parallel (BSP) model, the LogP model, or the block distributed storage (Block Distributed Model, BDM) model.
- the master node of the parallel computing model is the input point for the target image to be recognized and serves as the task assignment node (TaskTrack).
- each task node is a computation point for face attributes, that is, a task execution node (JobTrack).
- the first type of task node is used to calculate and identify whether the target image includes a face image.
- the second type of task node is used to calculate and recognize the face attribute of the face image when the target image includes a face image.
- the face attributes include gender, age, ethnicity, expression, etc.
- the number of second-type task nodes can be set according to the number of face attributes to be calculated: the more face attributes to be calculated, the greater the number of second-type task nodes.
- the parallel computing model may include four task nodes of the second type, and the face attributes corresponding to the four task nodes of the second type are gender, age, race, and expression, respectively.
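As a rough illustration of such a model, the sketch below builds a master node with one queue per task node standing in for the allocated storage spaces, one first-type worker, and one second-type worker per attribute. It assumes a single-machine Python multiprocessing setup; the helper functions, attribute list, and queue-based messaging are illustrative placeholders rather than anything prescribed by this description.

```python
import multiprocessing as mp

FACE_ATTRIBUTES = ["gender", "age", "ethnicity", "expression"]  # one second-type node per attribute

def detect_and_extract_face(target_image):
    # Placeholder for steps S12-S13 (detection, extraction, normalization); see later sketches.
    return target_image

def compute_attribute(attribute, face_image):
    # Placeholder for step S14 (the per-attribute classifier); see later sketches.
    return (attribute, "<predicted value>")

def first_type_node(inbox, outbox):
    """First-type task node: checks for a face and returns the extracted face image."""
    for target_image in iter(inbox.get, None):          # None acts as a shutdown sentinel
        outbox.put(detect_and_extract_face(target_image))

def second_type_node(attribute, inbox, outbox):
    """Second-type task node: computes exactly one face attribute."""
    for face_image in iter(inbox.get, None):
        outbox.put(compute_attribute(attribute, face_image))

def build_model():
    """Master-node setup: one queue ("storage space") per task node, plus a return channel."""
    to_master = mp.Queue()
    first_type_q = mp.Queue()
    second_type_qs = {attr: mp.Queue() for attr in FACE_ATTRIBUTES}
    workers = [mp.Process(target=first_type_node, args=(first_type_q, to_master))]
    workers += [mp.Process(target=second_type_node, args=(attr, q, to_master))
                for attr, q in second_type_qs.items()]
    for w in workers:
        w.start()                                        # run under `if __name__ == "__main__":` on Windows
    return first_type_q, second_type_qs, to_master, workers
```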
- Step S12: Obtain a target image through the master node, and allocate the target image to the storage space corresponding to the first-type task node, to trigger the first-type task node to perform calculation on the target image so as to identify whether a face image is included in the target image.
- the user can send, through a terminal device (such as a mobile phone or a tablet computer), a processing request for the target image stored in a designated directory of the terminal device to the master node.
- upon receiving the processing request, the master node obtains the target image from the designated directory.
- in this embodiment, before identifying whether the target image includes a face image, the method further includes: triggering the first-type task node to preprocess the target image so as to exclude interference from environmental factors such as lighting.
- the recognizing whether the target image includes a face image includes: a) triggering the first-type task node to scan whether the target image contains information of a preset local feature of a human face; and b) when the image contains such information, determining that the target image includes a face image.
- specifically, the preset local feature may be a nose, eyes, a mouth, etc.; when the scan of the target image finds the preset local feature, it is determined that the target image includes a face image.
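As one possible illustration of this check, the sketch below scans the target image for eye and mouth regions using OpenCV Haar cascades; the specific cascades, detection thresholds, and the histogram-equalization preprocessing step are assumptions standing in for whatever local-feature detectors and lighting preprocessing a real deployment would use.

```python
import cv2

EYE = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
MOUTH = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

def contains_face(target_image_bgr) -> bool:
    """Return True if the target image contains at least one preset local feature."""
    gray = cv2.cvtColor(target_image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # simple preprocessing to reduce lighting interference
    eyes = EYE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    mouths = MOUTH.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=10)
    return len(eyes) > 0 or len(mouths) > 0
```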
- Step S13: When the acquired target image includes a face image, extract the face image from the target image through the first-type task node to realize living-body detection and recognition, and return the extracted face image to the master node.
- in this embodiment, extracting the face image from the target image through the first-type task node includes: a) determining, through the first-type task node, the number of face images included in the target image; b) when the target image includes only one face image, delimiting a rectangular boundary around the face image through the first-type task node and extracting the face image according to the rectangular boundary; c) when the target image includes at least two face images, delimiting a rectangular boundary around each face image, calculating the area of the face image defined by each rectangular boundary, and selecting the rectangular boundary with the largest area to extract the face image; and d) performing geometric normalization on the extracted face image through the first-type task node.
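A minimal sketch of this selection rule follows, assuming candidate faces are already available as (x, y, w, h) rectangles from some detector.

```python
def pick_face_rectangle(rectangles):
    """Return the single (x, y, w, h) rectangle to extract, or None if no face was found."""
    if not rectangles:
        return None
    # With several candidate faces, keep the rectangle enclosing the largest face area.
    return max(rectangles, key=lambda r: r[2] * r[3])

def crop_face(image, rect):
    x, y, w, h = rect
    return image[y:y + h, x:x + w]  # NumPy-style slicing of an image array
```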
- the geometric normalization can normalize the face image to the same position, angle and size. Since the distance between a person's two eyes is basically the same for most people, the positions of the two eyes are usually used as the basis for geometric normalization of the face image.
- assuming the positions of the two eyes in the face image are E_l and E_r, the geometric normalization of the face image can be achieved through the following steps: a) rotate the face image so that the line connecting E_l and E_r remains horizontal, which ensures the consistency of the face orientation and reflects the rotation invariance of the face in the image plane; b) crop the face image according to a certain ratio, for example setting the point O as the midpoint of E_lE_r, so that after cropping the point O is fixed at (0.5d, d) in the 2d × 2d image, which ensures the consistency of the face position and reflects the translation invariance of the face in the image plane; and c) scale the cropped image to a uniform, standard size, for example 128 × 128 pixels with a scaling factor of β = 2d/128, which ensures the consistency of the face size and reflects the scale invariance of the face in the image plane.
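The sketch below illustrates this eye-based normalization with OpenCV, assuming the eye centres E_l and E_r are already known as (x, y) pixel coordinates, and reading the fixed position (0.5d, d) as (row, column) with d taken as the inter-eye distance; both readings are assumptions, and boundary padding and other production details are omitted.

```python
import numpy as np
import cv2

def normalize_face(image, el, er, out_size=128):
    el, er = np.asarray(el, dtype=float), np.asarray(er, dtype=float)
    # a) rotate so that the line connecting El and Er is horizontal (rotation invariance)
    angle = np.degrees(np.arctan2(er[1] - el[1], er[0] - el[0]))
    o = (el + er) / 2.0                                  # midpoint O of the two eyes
    rot = cv2.getRotationMatrix2D((float(o[0]), float(o[1])), angle, 1.0)
    rotated = cv2.warpAffine(image, rot, (image.shape[1], image.shape[0]))
    # b) crop a 2d x 2d window with O fixed at (0.5d, d), read here as (row, col):
    #    O horizontally centred and 0.5d below the top edge (translation invariance)
    d = float(np.linalg.norm(er - el))
    x0 = int(round(o[0] - d))                            # column of the crop's left edge
    y0 = int(round(o[1] - 0.5 * d))                      # row of the crop's top edge
    side = int(round(2 * d))
    crop = rotated[y0:y0 + side, x0:x0 + side]           # padding when the crop leaves the image is omitted
    # c) scale to a standard size, e.g. 128 x 128, i.e. a scaling factor of 2d/128 (scale invariance)
    return cv2.resize(crop, (out_size, out_size))
```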
- Step S14: The face image is allocated, through the master node, to the storage space corresponding to each second-type task node, and all the second-type task nodes are triggered to perform parallel calculation on all face attributes of the stored face image, so as to obtain the calculation results of all face attributes.
- in this embodiment, a deep learning framework (e.g., the TensorFlow framework) runs on each second-type task node, and the deep learning framework calls a trained deep learning classifier model to perform calculation on the face image so as to recognize the corresponding face attribute.
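A hedged sketch of one second-type node's classifier call follows, assuming a trained Keras/TensorFlow model has been saved per attribute; the model paths, input size, and label lists are illustrative and not part of this description.

```python
import numpy as np
import tensorflow as tf

LABELS = {"gender": ["female", "male"], "expression": ["neutral", "happy", "sad", "angry"]}

class AttributeNode:
    def __init__(self, attribute: str, model_path: str):
        self.attribute = attribute
        self.model = tf.keras.models.load_model(model_path)  # trained classifier for this attribute

    def compute(self, face_image_128x128x3: np.ndarray) -> str:
        x = face_image_128x128x3.astype("float32")[None, ...] / 255.0  # add batch dim, scale to [0, 1]
        probs = self.model.predict(x, verbose=0)[0]
        return LABELS[self.attribute][int(np.argmax(probs))]
```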
- taking the face attribute "gender" as an example, the process of calculating gender using the deep learning framework includes: a) obtaining the gender characteristic parameters of the face image and establishing a parameter model; b) calling a face gender classifier model; and c) comparing the parameter model with the face gender classifier model to calculate the gender.
- the gender characteristic parameters include hair (including beard) characteristic parameters, facial organ parameters, contour parameters and sex-characteristic feature parameters.
- taking the beard characteristic parameters as an example: first, the active shape model algorithm is used to locate the facial feature points and obtain the chin area; then, a skin-color segmentation algorithm is used to separate the non-skin-color area of the chin; finally, a beard color discrimination method is used to detect the beard in the non-skin-color area of the chin, and the beard characteristic parameters are extracted.
- a characteristic value can be assigned to the beard in the face image according to the color of the beard, for example, an initial value can be assigned according to the color or density of the beard, so as to obtain the beard characteristic parameter according to the preset initial value.
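As a rough illustration of this beard-density idea, the sketch below assumes the chin region of interest has already been located (for example by an active shape model, which is not shown) and uses illustrative YCrCb skin-colour thresholds to separate non-skin pixels before computing a density-based characteristic value; none of the thresholds come from this description.

```python
import cv2
import numpy as np

def beard_feature(chin_roi_bgr: np.ndarray) -> float:
    ycrcb = cv2.cvtColor(chin_roi_bgr, cv2.COLOR_BGR2YCrCb)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    skin = (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)  # a commonly used skin-colour band
    gray = cv2.cvtColor(chin_roi_bgr, cv2.COLOR_BGR2GRAY)
    beard = (~skin) & (gray < 80)                            # dark, non-skin pixels as beard candidates
    return float(beard.sum()) / beard.size                   # beard density in [0, 1] as the characteristic value
```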
- facial image features can be extracted and classified using local binary pattern (LBP) methods, neural network methods, or support vector machine (SVM) methods to obtain the gender characteristic parameters.
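The sketch below illustrates one of these options, LBP features fed to an SVM, assuming scikit-image and scikit-learn; the LBP settings and the training data are placeholders that would need to be chosen and fitted on a real labelled face set.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face: np.ndarray, points: int = 8, radius: float = 1.0) -> np.ndarray:
    """Uniform LBP histogram as a compact texture feature for a grayscale face image."""
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

# Hypothetical usage on a labelled training set (train_faces, train_genders):
# clf = SVC(kernel="rbf").fit([lbp_histogram(f) for f in train_faces], train_genders)
# gender = clf.predict([lbp_histogram(face_image_gray)])[0]
```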
- Step S15: Return the calculation result of the face attribute of each second-type task node to the master node, and trigger the master node to summarize the calculation results of all face attributes.
- after aggregation, the master node can obtain the calculation results of the different face attributes of the face image.
- in this embodiment, returning the calculation result of the face attribute of each second-type task node to the master node includes: the master node polls each second-type task node by sending a polling request message; when a second-type task node has finished calculating its face attribute, it returns its calculation result to the master node in response to the polling request; the master node then regularly sends the polling request message to the remaining second-type task nodes that have not yet returned a calculation result, until every second-type task node has returned its result.
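A simple sketch of this polling loop follows, reusing the queue-per-node layout from the earlier model sketch; the poll interval and the use of a plain non-blocking queue read as the "polling request" are illustrative assumptions.

```python
import queue
import time

def collect_results(result_queues, poll_interval=0.1):
    """Master-node polling loop: gather one attribute result from every second-type node."""
    pending = set(result_queues)                 # attributes whose nodes have not answered yet
    summary = {}
    while pending:
        for attribute in list(pending):
            try:
                summary[attribute] = result_queues[attribute].get_nowait()  # poll this node
                pending.discard(attribute)
            except queue.Empty:                  # not finished yet; poll it again next round
                pass
        if pending:
            time.sleep(poll_interval)            # re-poll the unanswered nodes periodically
    return summary
```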
- Step S16: The summary result of the master node is output.
- the computer device includes a display screen, and the summary result is displayed on the display screen.
- the display screen may be a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display.
- the summary result can also be played through the microphone of the computer device.
- FIG. 1 above details the face attribute recognition method of the present application; in the following, the functional modules of the software device that implements the face attribute recognition method and the architecture of the hardware device that implements the method are introduced.
- FIG. 3 is a schematic structural diagram of a face attribute recognition device 300 according to a preferred embodiment of the present application.
- the face attribute recognition device 300 runs on a computer device.
- the face attribute recognition device 300 may include a plurality of function modules composed of program code segments.
- the program codes of each program segment of the face attribute recognition device 300 may be stored in the memory of the computer device and executed by the at least one processor to implement the face attribute recognition function.
- the face attribute recognition device 300 may be divided into multiple functional modules according to the functions it performs.
- the functional modules may include: a construction module 101, an allocation module 102, an extraction module 103, a return module 104, and an output module 105.
- the module referred to in this application refers to a series of computer program segments that can be executed by at least one processor and can perform fixed functions, and are stored in a memory.
- the functions of each module will be described in detail in subsequent embodiments.
- the construction module 101 is used to construct a parallel computing model, the parallel computing model including a master node and a plurality of task nodes connected to the master node; the construction module 101 is also used to allocate a corresponding storage space to each task node, wherein all the task nodes are divided into one first-type task node and at least two second-type task nodes, and each second-type task node corresponds to one face attribute.
- the parallel computing model may adopt one of the following models: the Parallel Random Access Machine (PRAM) model, the Bulk Synchronous Parallel (BSP) model, the LogP model, or the block distributed storage (Block Distributed Model, BDM) model.
- the master node of the parallel computing model is the input point for the target image to be recognized and serves as the task assignment node (TaskTrack).
- each task node is a computation point for face attributes, that is, a task execution node (JobTrack).
- the first type of task node is used to calculate and identify whether the target image includes a face image.
- the second type of task node is used to calculate and recognize the face attribute of the face image when the target image includes a face image.
- the face attributes include gender, age, ethnicity, expression, etc.
- the number of second-type task nodes can be set according to the number of face attributes to be calculated: the more face attributes to be calculated, the greater the number of second-type task nodes.
- the parallel computing model may include four task nodes of the second type, and the face attributes corresponding to the four task nodes of the second type are gender, age, race, and expression, respectively.
- the allocation module 102 is configured to acquire a target image through the master node, allocate the target image to the storage space corresponding to the first-type task node, and trigger the first-type task node to perform calculation on the target image so as to identify whether the target image includes a face image.
- the user can send, through a terminal device (such as a mobile phone or a tablet computer), a processing request for the target image stored in a designated directory of the terminal device to the master node; upon receiving the processing request, the master node obtains the target image from the designated directory.
- the allocation module 102 is also used to trigger the first-type task node to preprocess the target image so as to exclude interference from environmental factors such as lighting.
- the allocation module 102 triggers the first-type task node to scan whether the target image contains information of a preset local feature of a human face; when the image contains such information, it is determined that the target image includes a face image. Specifically, the preset local feature may be a nose, eyes, a mouth, etc.; when the scan of the target image finds the preset local feature, it is determined that the target image includes a face image.
- the extraction module 103 is configured to extract the face image from the target image through the first-type task node when the acquired target image includes a face image to implement living body detection and recognition.
- the return module 104 is used to return the face image extracted by the extraction module 103 to the master node.
- the extraction module 103 determines, through the first-type task node, the number of face images included in the target image. When the target image includes only one face image, the first-type task node delimits a rectangular boundary around the face image and extracts the face image according to the rectangular boundary; when the target image includes at least two face images, the first-type task node delimits a rectangular boundary around each face image, calculates the area of the face image defined by each rectangular boundary, and selects the rectangular boundary with the largest area to extract the face image. The extracted face image is then subjected to geometric normalization through the first-type task node, thereby extracting the face image from the target image.
- the normalization processing of the geometric characteristics can normalize the face image to the same position, angle and size. Since the distance between two eyes of a person is basically the same for most people, the position of the two eyes is usually used as the basis for geometric normalization of the face image.
- assuming the positions of the two eyes in the face image are E_l and E_r, the extraction module 103 can realize the geometric normalization of the face image through the following steps: a) rotate the face image so that the line connecting E_l and E_r remains horizontal; b) crop the face image according to a certain ratio, with the midpoint O of E_lE_r fixed at (0.5d, d) in the cropped 2d × 2d image, which ensures the consistency of the face position and reflects the translation invariance of the face in the image plane; and c) scale the cropped image to a uniform, standard size, for example 128 × 128 pixels with a scaling factor of β = 2d/128.
- the allocation module 102 is further configured to allocate the face image, through the master node, to the storage space corresponding to each second-type task node, and to trigger all the second-type task nodes to perform parallel calculation on all face attributes of the stored face image, so as to obtain the calculation results of all face attributes.
- in this embodiment, a deep learning framework (e.g., the TensorFlow framework) runs on each second-type task node, and the deep learning framework calls a trained deep learning classifier model to perform calculation on the face image so as to recognize the corresponding face attribute.
- taking the face attribute "gender" as an example, the allocation module 102 obtains the gender characteristic parameters of the face image and establishes a parameter model, calls a face gender classifier model, and compares the parameter model with the face gender classifier model so as to calculate the gender using the deep learning framework.
- the gender characteristic parameters include hair (including beard) characteristic parameters, facial organ parameters, contour parameters and sex-characteristic feature parameters.
- taking the beard characteristic parameters as an example: first, the active shape model algorithm is used to locate the facial feature points and obtain the chin area; then, a skin-color segmentation algorithm is used to separate the non-skin-color area of the chin; finally, a beard color discrimination method is used to detect the beard in the non-skin-color area of the chin, and the beard characteristic parameters are extracted.
- a characteristic value can be assigned to the beard in the face image according to the color of the beard, for example, an initial value can be assigned according to the color or density of the beard, so as to obtain the beard characteristic parameter according to the preset initial value.
- facial image features can be extracted and classified using local binary pattern (LBP) methods, neural network methods, or support vector machine (SVM) methods to obtain the gender characteristic parameters.
- the return module 104 is further used to return the calculation result of the face attribute of each second-type task node to the master node, and to trigger the master node to summarize the calculation results of all face attributes.
- after aggregation, the master node can obtain the calculation results of the different face attributes of the face image.
- in this embodiment, the return module 104 polls each second-type task node through the master node by sending a polling request message to each second-type task node; when at least one second-type task node has finished calculating its face attribute, the calculation result of that second-type task node is returned to the master node. The return module 104 further triggers the master node to regularly send the polling request message to the other second-type task nodes that have not yet returned a calculation result, so that these second-type task nodes return their calculation results to the master node once their face attributes have been calculated.
- the output module 105 is used to output the summary result of the master node.
- the computer device includes a display screen, and the summary result is displayed on the display screen.
- the display screen may be a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display.
- the summary result can also be played through the microphone of the computer device.
- the face attribute recognition device in the embodiment of the present application calculates different face attributes of the face image in parallel, which helps improve computing efficiency.
- the face attribute recognition method can subsequently be deployed in various public places, such as stations, subways, shopping malls and supermarkets: target images collected by the cameras at these places are acquired, the various face attributes are recognized, and the recognition results are compared with the face attribute recognition results of specific groups of people (such as criminals), so as to efficiently screen the target population according to the degree of matching.
- FIG. 4 is a schematic structural diagram of a computer device 1 that implements the face attribute recognition method in a preferred embodiment of the present application.
- the computer device 1 includes a memory 20, a processor 30, and computer-readable instructions 40 stored in the memory 20 and executable on the processor 30, such as a face attribute recognition program.
- when the processor 30 executes the computer-readable instructions 40, the steps in the above embodiment of the face attribute recognition method (steps S11 to S16) are implemented; alternatively, the functions of each module/unit in the above embodiment of the face attribute recognition device, such as the units 101-105 in FIG. 3, are implemented.
- the computer-readable instructions 40 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 20 and executed by the processor 30 to complete the present application.
- the one or more modules / units may be a series of computer-readable instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 40 in the computer device 1.
- for example, the computer-readable instructions 40 may be divided into the construction module 101, the allocation module 102, the extraction module 103, the return module 104, and the output module 105 in FIG. 3.
- the computer device 1 may be a computing device such as a desktop computer, a notebook, a palmtop computer and a cloud server.
- a person skilled in the art may understand that the schematic diagram is only an example of the computer device 1 and does not constitute a limitation on the computer device 1; the computer device 1 may include more or fewer components than illustrated, combine certain components, or have different components; for example, the computer device 1 may also include input and output devices, network access devices, buses, and the like.
- the processor 30 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
- the general-purpose processor may be a microprocessor or the processor 30 may also be any conventional processor, etc.
- the processor 30 is the control center of the computer device 1 and connects the various parts of the entire computer device 1 using various interfaces and lines.
- the memory 20 may be used to store the computer-readable instructions 40 and/or modules/units; the processor 30 runs or executes the computer-readable instructions and/or modules/units stored in the memory 20 and calls the data stored in the memory 20 to realize the various functions of the computer device 1.
- the memory 20 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the computer device 1 (such as audio data or a phone book).
- the memory 20 may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
- if the modules/units integrated in the computer device 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a non-volatile readable storage medium.
- the present application can implement all or part of the processes in the methods of the above embodiments, and can also be completed by instructing relevant hardware through computer-readable instructions.
- the computer-readable instructions when executed by the processor, can implement the steps of the foregoing method embodiments.
- the computer readable instructions include computer readable instruction codes, and the computer readable instruction codes may be in source code form, object code form, executable file, or some intermediate form, etc.
- the non-volatile readable medium may include: any entity or device capable of carrying the computer-readable instruction code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, or a software distribution medium.
- the content contained in the non-volatile readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, non-volatile readable media do not include electrical carrier signals and telecommunication signals.
- the functional units in the embodiments of the present application may be integrated in the same processing unit, or each unit may exist alone physically, or two or more units may be integrated in the same unit.
- the above integrated unit can be implemented in the form of hardware, or in the form of hardware plus software function modules.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
A facial attribute identification method, an identification device, a computer device and a storage medium, wherein the method comprises: constructing a parallel computation model, comprising a main node and multiple task nodes connected to the main node (S11); acquiring a target image by means of the main node, allocating the target image to a storage space corresponding to a first category of task nodes, and triggering the first category of task nodes to identify whether the target image comprises a facial image (S12); when the target image comprises a facial image, extracting the facial image by means of the first category of task nodes, and returning the facial image to the main node (S13); by means of the main node, allocating the facial image to a storage space corresponding to each of a second category of task nodes, and triggering all of the second category of task nodes to perform parallel computation on all of the facial attributes of the facial image (S14); returning a computation result of the facial attributes of each of the second category of task nodes to the main node, and triggering the main node to summarize the computation results of all of the facial attributes (S15); and outputting the summarized result of the main node (S16). In the method, different facial attributes of a facial image are computed by means of a parallel computation model to improve computational efficiency and help the subsequent searching of a face, thereby effectively screening a target population.
Description
This application claims priority to the Chinese patent application filed with the China Patent Office on October 30, 2018, with application number 201811281100.0 and the invention title "Face attribute recognition method, device, computer device and storage medium", the entire content of which is incorporated herein by reference.
The present application relates to face attribute recognition, and in particular to a face attribute recognition method, a face attribute recognition device, a computer device, and a storage medium.
In daily life, the need for identity recognition exists widely in all walks of life, such as financial services, customs entry and exit, and national security. With the development of technology, the advantages of biometrics in the field of identity recognition have become more and more obvious; among them, face recognition is a research direction that has developed rapidly in recent years. Face recognition collects user images through a camera; when a face is detected, attributes of the user image (such as gender, age, posture, or expression) are identified, and the extracted attributes are then compared with preset attribute information stored in a database, thereby realizing face recognition of the user.
However, the recognition of different attributes in the prior art is usually based on mutually independent algorithms, so that a face can only be recognized with respect to one attribute at a time, and the recognition efficiency is low.
Summary of the Invention
In view of the above, it is necessary to provide a face attribute recognition method, a face attribute recognition device, a computer device, and a storage medium that can improve recognition efficiency, so as to solve the above problems.
A preferred embodiment of the present application provides a face attribute recognition method, including: constructing a parallel computing model, where the parallel computing model includes a master node and a plurality of task nodes connected to the master node, and a corresponding storage space is allocated to each task node, all the task nodes being divided into one first-type task node and at least two second-type task nodes, each second-type task node corresponding to one face attribute; acquiring a target image through the master node, allocating the target image to the storage space corresponding to the first-type task node, and triggering the first-type task node to perform calculation on the target image to identify whether the target image includes a face image; when the acquired target image includes a face image, extracting the face image from the target image through the first-type task node, and returning the extracted face image to the master node; allocating the face image, through the master node, to the storage space corresponding to each second-type task node, and triggering all the second-type task nodes to perform parallel calculation on all face attributes of the stored face image, so as to obtain the calculation results of all face attributes; and returning the calculation result of the face attribute of each second-type task node to the master node, and triggering the master node to summarize the calculation results of all face attributes.
A preferred embodiment of the present application further provides a face attribute recognition device, including: a construction module for constructing a parallel computing model, where the parallel computing model includes a master node and a plurality of task nodes connected to the master node, and for allocating a corresponding storage space to each task node, all the task nodes being divided into one first-type task node and at least two second-type task nodes, each second-type task node corresponding to one face attribute; an allocation module for acquiring a target image through the master node, allocating the target image to the storage space corresponding to the first-type task node, and triggering the first-type task node to perform calculation on the target image to identify whether the target image includes a face image;
an extraction module for extracting the face image from the target image through the first-type task node when the acquired target image includes a face image; and a return module for returning the face image extracted by the extraction module to the master node. The allocation module is further used to allocate the face image, through the master node, to the storage space corresponding to each second-type task node and to trigger all the second-type task nodes to perform parallel calculation on all face attributes of the stored face image, so as to obtain the calculation results of all face attributes; and the return module is further used to return the calculation result of the face attribute of each second-type task node to the master node and to trigger the master node to summarize the calculation results of all face attributes.
A preferred embodiment of the present application further provides a computer device, including a processor and a memory, where at least one computer-readable instruction is stored in the memory, and the processor is used to execute the computer-readable instruction to implement the face attribute recognition method described above.
A preferred embodiment of the present application further provides a non-volatile readable storage medium, which stores at least one computer-readable instruction; when the computer-readable instruction is executed by a processor, the face attribute recognition method described above is implemented.
The present application calculates different face attributes of the face image in parallel through the parallel computing model, which helps improve computing efficiency. The face attribute recognition method can subsequently be deployed in various public places, such as stations, subways, shopping malls and supermarkets: target images collected by the cameras at these places are acquired, the various face attributes are recognized, and the recognition results are compared with the face attribute recognition results of specific groups of people (such as criminals), so as to efficiently screen the target population according to the degree of matching.
FIG. 1 is a flowchart of the face attribute recognition method provided in Embodiment 1 of the present application.
FIG. 2 is a schematic diagram of the geometric normalization of a face image in the face attribute recognition method shown in FIG. 1.
FIG. 3 is a schematic structural diagram of the face attribute recognition device provided in Embodiment 2 of the present application.
FIG. 4 is a schematic diagram of the computer device provided in Embodiment 3 of the present application.
FIG. 1 is a flowchart of the face attribute recognition method provided by a preferred embodiment of the present application. The face attribute recognition method is applied to a computer device 1. According to different requirements, the order of the steps of the face attribute recognition method may be changed, and some steps may be omitted or combined. The face attribute recognition method includes the following steps:
Step S11: Construct a parallel computing model. The parallel computing model includes a master node and a plurality of task nodes connected to the master node, and a corresponding storage space is allocated to each task node. All the task nodes are divided into one first-type task node and at least two second-type task nodes, and each second-type task node corresponds to one face attribute.
The parallel computing model may adopt one of the following models: the Parallel Random Access Machine (PRAM) model, the Bulk Synchronous Parallel (BSP) model, the LogP model, or the block distributed storage (Block Distributed Model, BDM) model. The master node of the parallel computing model is the input point for the target image to be recognized and serves as the task assignment node (TaskTrack). Each task node is a computation point for face attributes, that is, a task execution node (JobTrack). The first-type task node is used to calculate and identify whether the target image includes a face image. The second-type task node is used to calculate and recognize a face attribute of the face image when the target image includes a face image.
The face attributes include gender, age, ethnicity, expression, etc. The number of second-type task nodes can be set according to the number of face attributes to be calculated: the more face attributes to be calculated, the greater the number of second-type task nodes. For example, the parallel computing model may include four second-type task nodes, and the face attributes corresponding to the four second-type task nodes are gender, age, ethnicity, and expression, respectively.
Step S12: Obtain a target image through the master node, and allocate the target image to the storage space corresponding to the first-type task node, to trigger the first-type task node to perform calculation on the target image so as to identify whether a face image is included in the target image.
The user can send, through a terminal device (such as a mobile phone or a tablet computer), a processing request for the target image stored in a designated directory of the terminal device to the master node. Upon receiving the processing request, the master node obtains the target image from the designated directory.
In this embodiment, before identifying whether the target image includes a face image, the method further includes: triggering the first-type task node to preprocess the target image so as to exclude interference from environmental factors such as lighting.
Due to the influence of lighting, there are many uncertainties in the target image during shooting, such as light intensity, light source direction, and color, which make the gray levels of the target image uneven and give the face a large local contrast, thereby affecting the final recognition result. It is therefore necessary to apply a light adjustment technique to the acquired target image.
Further, the recognizing whether the target image includes a face image includes:
a) triggering the first-type task node to scan whether the target image contains information of a preset local feature of a human face;
b) when the image contains information of a preset local feature of a human face, determining that the target image includes a face image.
Specifically, the preset local feature may be a nose, eyes, a mouth, etc.; when the scan of the target image finds the preset local feature, it is determined that the target image includes a face image.
Step S13: When the acquired target image includes a face image, extract the face image from the target image through the first-type task node to realize living-body detection and recognition, and return the extracted face image to the master node.
In this embodiment, extracting the face image from the target image through the first-type task node includes:
a) determining, through the first-type task node, the number of face images included in the target image;
b) when the target image includes only one face image, delimiting a rectangular boundary around the face image through the first-type task node, and extracting the face image according to the rectangular boundary;
c) when the target image includes at least two face images, delimiting a rectangular boundary around each face image through the first-type task node, calculating the area of the face image defined by each rectangular boundary, and selecting the rectangular boundary with the largest area to extract the face image;
d) performing geometric normalization on the extracted face image through the first-type task node. The geometric normalization can normalize the face image to the same position, angle and size. Since the distance between a person's two eyes is basically the same for most people, the positions of the two eyes are usually used as the basis for geometric normalization of the face image.
Specifically, as shown in FIG. 2, assuming that the positions of the two eyes in the face image are El and Er respectively, the geometric normalization of the face image can be achieved through the following steps:

a) Rotate the face image so that the line ElEr connecting El and Er is horizontal. This guarantees the consistency of the face orientation and reflects the rotation invariance of the face in the image plane;

b) Crop the face image according to a fixed ratio. For example, let point O be the midpoint of ElEr and let the eye distance |ElEr| be d. After cropping, within the 2d×2d image, point O can be kept fixed at (0.5d, d). This guarantees the consistency of the face position and reflects the translation invariance of the face in the image plane;

c) Scale the cropped image down or up to obtain a face image of uniform, standard size. For example, if the prescribed image size is 128×128 pixels, that is, |ElEr| is made a fixed length of 64 pixels, the scaling factor is β = 2d/128. This guarantees the consistency of the face size and reflects the scale invariance of the face in the image plane.
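A minimal sketch of this eye-based normalization, assuming the two eye centers have already been located, is given below. The helper name, the use of OpenCV affine warping, and the configurable output position of point O are illustrative choices, not the claimed implementation:

```python
import math

import cv2


def normalize_face(image, el, er, out_size=128, o_frac=(0.25, 0.5)):
    """Rotate, crop and scale a face image based on the two eye centers.

    el, er   : (x, y) positions of the left and right eye centers.
    out_size : side length of the square output image (128 in the example).
    o_frac   : where the eye midpoint O lands in the output, as fractions of
               out_size; (0.25, 0.5) corresponds to the (0.5d, d) position
               inside the 2d x 2d crop described in the text.
    """
    dx, dy = er[0] - el[0], er[1] - el[1]
    d = math.hypot(dx, dy)                     # eye distance |ElEr|
    angle = math.degrees(math.atan2(dy, dx))   # rotate so line ElEr is horizontal
    ox, oy = (el[0] + er[0]) / 2.0, (el[1] + er[1]) / 2.0  # midpoint O

    # Scale so the 2d x 2d crop becomes out_size x out_size
    # (the text's beta = 2d/128 is the inverse mapping, output to original).
    scale = out_size / (2.0 * d)
    m = cv2.getRotationMatrix2D((ox, oy), angle, scale)
    # Shift so that O ends up at the requested output position.
    m[0, 2] += o_frac[0] * out_size - ox
    m[1, 2] += o_frac[1] * out_size - oy
    return cv2.warpAffine(image, m, (out_size, out_size))
```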
Step S14: the master node allocates the face image to the storage space corresponding to each second-type task node, and triggers all of the second-type task nodes to compute all of the face attributes of the stored face image in parallel, so as to obtain the calculation results for all of the face attributes.
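The description names PRAM, BSP, LogP and BDM as candidate parallel models but does not fix an implementation. Purely as an illustration of a master node fanning one face image out to per-attribute workers and collecting their results, a process-pool sketch might look like the following; the worker functions are hypothetical placeholders:

```python
from concurrent.futures import ProcessPoolExecutor


# Hypothetical per-attribute workers; each would wrap its own trained classifier.
def predict_gender(face): return "male"
def predict_age(face): return "18-20"
def predict_race(face): return "Han"
def predict_expression(face): return "smile"


ATTRIBUTE_WORKERS = {
    "gender": predict_gender,
    "age": predict_age,
    "race": predict_race,
    "expression": predict_expression,
}


def master_dispatch(face_image):
    """Fan the face image out to one worker per attribute and aggregate results."""
    with ProcessPoolExecutor(max_workers=len(ATTRIBUTE_WORKERS)) as pool:
        futures = {name: pool.submit(fn, face_image)
                   for name, fn in ATTRIBUTE_WORKERS.items()}
        return {name: fut.result() for name, fut in futures.items()}
```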
In this embodiment, each second-type task node runs a deep learning framework (for example, the TensorFlow framework), and the deep learning framework calls a trained deep learning classifier model to perform computation on the face image so as to recognize the corresponding face attribute.
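As a sketch of what a single second-type node's inference step could look like with TensorFlow, assuming a previously trained Keras classifier saved at a hypothetical path and a normalized 128×128 input (the path, input scaling and output convention are assumptions, not the claimed model):

```python
import numpy as np
import tensorflow as tf

# Hypothetical path; each second-type node would load its own attribute model.
model = tf.keras.models.load_model("models/gender_classifier.h5")


def predict_attribute(face_128x128_bgr):
    """Run the node's trained classifier on a normalized 128x128 face image."""
    x = face_128x128_bgr.astype(np.float32) / 255.0       # simple scaling assumption
    probs = model.predict(x[np.newaxis, ...], verbose=0)  # add batch dimension
    return int(np.argmax(probs, axis=-1)[0])              # predicted class index
```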
Taking the face attribute "gender" as an example, the process of computing gender with the deep learning framework includes:

a) obtaining the gender characteristic parameters of the face image and building a parameter model;

The gender characteristic parameters include hair (including beard) characteristic parameters, facial organ parameters, contour parameters, sexual characteristic parameters and so on. Taking the beard characteristic parameters as an example: first the facial feature points are located and skin-color segmentation is performed; an active shape model algorithm locates the facial feature points and thereby obtains the chin region; a skin-color segmentation algorithm then separates out the non-skin-color region of the chin; finally, a beard-color discrimination method detects the beard in the non-skin-color region of the chin, so that the beard characteristics can be parameterized. A characteristic value can be assigned to the beard in the face image according to its color; for example, an initial value is assigned according to the color or density of the beard, so that the beard characteristic parameter is obtained from the preset initial value.

In this embodiment, feature extraction and classification of the face image can be performed with methods such as Local Binary Patterns (LBP), neural network methods and SVM (Support Vector Machine) methods so as to obtain the gender characteristic parameters.

b) calling a face gender classifier model;

c) comparing the parameter model with the face gender classifier model, so as to recognize the gender of the face image.
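As a sketch of the LBP-plus-SVM route mentioned above, scikit-image and scikit-learn are assumed, and the histogram settings, label convention and training data are illustrative rather than the description's parameters:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC


def lbp_histogram(gray_face, points=8, radius=1):
    """Describe a grayscale face by a normalized uniform-LBP histogram."""
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    n_bins = points + 2                       # uniform patterns + "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist


def train_gender_svm(train_faces, train_labels):
    """Fit an SVM gender classifier; train_faces/train_labels are assumed given
    (e.g. 0 = female, 1 = male)."""
    features = np.array([lbp_histogram(f) for f in train_faces])
    clf = SVC(kernel="rbf", probability=True)
    return clf.fit(features, train_labels)
```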
Step S15: the calculation result of the face attribute of each second-type task node is returned to the master node, and the master node is triggered to aggregate the calculation results of all of the face attributes.

For example, when the calculation results of the second-type task nodes are: gender is male, age is 18-20 years, race is Han, and expression is smiling, the master node obtains, after aggregation, the calculation results of the different face attributes of the face image.
In this embodiment, returning the calculation result of the face attribute of each second-type task node to the master node includes:

a) polling each second-type task node through the master node, and sending a polling request packet to each second-type task node, so that when a second-type task node has received the polling request packet and has finished computing its face attribute, it returns its calculation result to the master node;

b) continuing, at regular intervals, to send the polling request packet from the master node to the other second-type task nodes that have not yet returned a calculation result, so that those second-type task nodes return their calculation results to the master node once they have finished computing their face attributes.
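A minimal sketch of this polling loop, assuming each second-type task node exposes a poll(request) call that returns its result once ready and None otherwise (that interface is an assumption of this example, not the described protocol):

```python
import time


def collect_results(task_nodes, interval=0.5):
    """Poll every second-type task node until each one has returned a result.

    task_nodes: mapping of attribute name -> node object with a poll() method.
    """
    results, pending = {}, dict(task_nodes)
    while pending:
        for name, node in list(pending.items()):
            result = node.poll("POLL_REQUEST")   # the polling request packet
            if result is not None:               # node finished this attribute
                results[name] = result
                del pending[name]
        if pending:
            time.sleep(interval)                 # re-poll unfinished nodes later
    return results
```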
Step S16: output the aggregated result of the master node.

In this embodiment, the computer device includes a display screen, and the aggregated result is displayed on the display screen. The display screen may be a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display. Of course, in other embodiments, the aggregated result may also be played back through the microphone of the computer device.
FIG. 1 above describes the face attribute recognition method of the present application in detail. With reference to FIGS. 3-4, the functional modules of the software device that implements the face attribute recognition method and the hardware architecture of the device that implements the face attribute recognition method are introduced below.

It should be understood that the described embodiments are for illustration only, and the scope of the patent application is not limited by this structure.
FIG. 3 is a schematic structural diagram of a face attribute recognition device 300 provided in a preferred embodiment of the present application. In some embodiments, the face attribute recognition device 300 runs in a computer device. The face attribute recognition device 300 may include a plurality of functional modules composed of program code segments. The program code of each program segment of the face attribute recognition device 300 may be stored in the memory of the computer device and executed by the at least one processor to implement the face attribute recognition function.

In this embodiment, the face attribute recognition device 300 may be divided into a plurality of functional modules according to the functions it performs. Referring to FIG. 3, the functional modules may include: a construction module 101, an allocation module 102, an extraction module 103, a return module 104 and an output module 105. A module referred to in this application is a series of computer program segments that can be executed by at least one processor and can perform a fixed function, and that are stored in a memory. In this embodiment, the functions of each module will be described in detail in the subsequent embodiments.
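Purely as an illustrative grouping of these five modules in code (not the patented implementation), the device could be organized as a class whose methods mirror the module names; every node interface shown here is hypothetical:

```python
class FaceAttributeRecognitionDevice:
    """Groups the five functional modules described for device 300 (sketch only)."""

    def __init__(self, master_node, first_type_node, second_type_nodes):
        self.master = master_node
        self.detector = first_type_node            # first-type task node
        self.attribute_nodes = second_type_nodes   # one node per face attribute

    def build(self):                               # construction module 101
        self.master.allocate_storage([self.detector, *self.attribute_nodes.values()])

    def assign(self, target_image):                # allocation module 102
        return self.detector.contains_face(target_image)

    def extract(self, target_image):               # extraction module 103
        return self.detector.extract_face(target_image)

    def collect(self):                             # return module 104
        return self.master.aggregate()

    def output(self, summary):                     # output module 105
        print(summary)
```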
The construction module 101 is used to construct a parallel computing model. The parallel computing model includes a master node and a plurality of task nodes connected to the master node. The construction module 101 is also used to allocate a corresponding storage space to each task node, wherein all of the task nodes are divided into one first-type task node and at least two second-type task nodes, and each second-type task node corresponds to one face attribute.

The parallel computing model may be one of the Parallel Random Access Machine (PRAM) model, the Bulk Synchronous Parallel (BSP) computing model, the LogP model and the Block Distributed Memory (BDM) model. The master node of the parallel computing model is the input point of the target image to be recognized and the task assignment node TaskTrack. The task nodes are the computation points for the face attributes, i.e. the task execution nodes JobTrack. The first-type task node is used to compute and identify whether the target image includes a face image. The second-type task nodes are used to compute and recognize the face attributes of the face image when the target image includes a face image.

The face attributes include gender, age, race, expression and so on. The number of second-type task nodes can be set according to the number of face attributes to be computed; the more face attributes to be computed, the larger the number of second-type task nodes. For example, the parallel computing model may include four second-type task nodes whose corresponding face attributes are gender, age, race and expression, respectively.
The allocation module 102 is used to acquire a target image through the master node, allocate the target image to the storage space corresponding to the first-type task node, and trigger the first-type task node to perform computation on the target image so as to identify whether the target image includes a face image.

A user may send, through a terminal device (such as a mobile phone or a tablet computer), a processing request for the target image stored in a designated directory of the terminal device to the master node. Upon receiving the processing request, the master node acquires the target image through the designated directory.
In this embodiment, before identifying whether the target image includes a face image, the allocation module 102 is also used to trigger the first-type task node to preprocess the target image so as to eliminate interference from environmental factors such as lighting.

Because of illumination effects, the target image is subject to many uncertainties during capture, such as light intensity, light-source direction and color. These make the gray levels of the target image uneven and give the face a high local contrast, which degrades the final recognition result. It is therefore necessary to apply a light-adjustment technique to the acquired target image.

Further, the allocation module 102 triggers the first-type task node to scan the target image for information on preset local features of a human face, and when the image contains information on preset local features of a human face, it is determined that the target image includes a face image.

Specifically, the preset local features may be the nose, eyes, mouth and so on. When the scan of the target image finds the preset local features, it is determined that the target image includes a face image.
The extraction module 103 is used to extract, when the acquired target image includes a face image, the face image from the target image through the first-type task node so as to realize liveness detection and recognition. The return module 104 is used to return the face image extracted by the extraction module 103 to the master node.

In this embodiment, the extraction module 103 determines, through the first-type task node, the number of face images included in the target image. When the target image includes only one face image, the first-type task node delimits a rectangular boundary around the face image and extracts the face image according to the rectangular boundary. When the target image includes at least two face images, the first-type task node delimits a rectangular boundary around each face image, calculates the area of the face image bounded by each rectangular boundary, and extracts the face image from the rectangular boundary with the largest area. The extracted face image is then subjected to geometric normalization through the first-type task node, so that the face image is extracted from the target image. The geometric normalization brings the face image to the same position, angle and size. Because the distance between a person's two eyes is essentially the same for most people, the positions of the two eyes are usually used as the basis for the geometric normalization of the face image.
Specifically, as shown in FIG. 2, assuming that the positions of the two eyes in the face image are El and Er respectively, the extraction module 103 can achieve the geometric normalization of the face image through the following steps:

a) Rotate the face image so that the line ElEr connecting El and Er is horizontal. This guarantees the consistency of the face orientation and reflects the rotation invariance of the face in the image plane;

b) Crop the face image according to a fixed ratio. For example, let point O be the midpoint of ElEr and let the eye distance |ElEr| be d. After cropping, within the 2d×2d image, point O can be kept fixed at (0.5d, d). This guarantees the consistency of the face position and reflects the translation invariance of the face in the image plane;

c) Scale the cropped image down or up to obtain a face image of uniform, standard size. For example, if the prescribed image size is 128×128 pixels, that is, |ElEr| is made a fixed length of 64 pixels, the scaling factor is β = 2d/128. This guarantees the consistency of the face size and reflects the scale invariance of the face in the image plane.
The allocation module 102 is also used to allocate the face image, through the master node, to the storage space corresponding to each second-type task node, and to trigger all of the second-type task nodes to compute all of the face attributes of the stored face image in parallel, so as to obtain the calculation results for all of the face attributes.

In this embodiment, each second-type task node runs a deep learning framework (for example, the TensorFlow framework), and the deep learning framework calls a trained deep learning classifier model to perform computation on the face image so as to recognize the corresponding face attribute.

Taking the face attribute "gender" as an example, the allocation module 102 obtains the gender characteristic parameters of the face image and builds a parameter model, calls a face gender classifier model, and compares the parameter model with the face gender classifier model, so that gender is computed with the deep learning framework.

The gender characteristic parameters include hair (including beard) characteristic parameters, facial organ parameters, contour parameters, sexual characteristic parameters and so on. Taking the beard characteristic parameters as an example: first the facial feature points are located and skin-color segmentation is performed; an active shape model algorithm locates the facial feature points and thereby obtains the chin region; a skin-color segmentation algorithm then separates out the non-skin-color region of the chin; finally, a beard-color discrimination method detects the beard in the non-skin-color region of the chin, so that the beard characteristics can be parameterized. A characteristic value can be assigned to the beard in the face image according to its color; for example, an initial value is assigned according to the color or density of the beard, so that the beard characteristic parameter is obtained from the preset initial value.

In this embodiment, feature extraction and classification of the face image can be performed with methods such as Local Binary Patterns (LBP), neural network methods and SVM (Support Vector Machine) methods so as to obtain the gender characteristic parameters.
The return module 104 is also used to return the calculation result of the face attribute of each second-type task node to the master node and to trigger the master node to aggregate the calculation results of all of the face attributes.

For example, when the calculation results of the second-type task nodes are: gender is male, age is 18-20 years, race is Han, and expression is smiling, the master node obtains, after aggregation, the calculation results of the different face attributes of the face image.

In this embodiment, the return module 104 polls each second-type task node through the master node and sends a polling request packet to each second-type task node, so that when a second-type task node has received the polling request packet and has finished computing its face attribute, it returns its calculation result to the master node. The return module 104 also continues, at regular intervals, to send the polling request packet from the master node to the other second-type task nodes that have not yet returned a calculation result, so that those second-type task nodes return their calculation results to the master node once they have finished computing their face attributes.
The output module 105 is used to output the aggregated result of the master node.

In this embodiment, the computer device includes a display screen, and the aggregated result is displayed on the display screen. The display screen may be a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display. Of course, in other embodiments, the aggregated result may also be played back through the microphone of the computer device.
As described above, the face attribute recognition device in the embodiments of the present application computes the different face attributes of the face image in parallel through the parallel computing model, which helps improve computational efficiency. The face attribute recognition method can further be deployed in various public places, such as stations, subways, shopping malls and supermarkets: the target images collected by the cameras at these places are acquired, the various face attributes are recognized, and the recognition results are compared with the face attribute recognition results of a specific population (such as criminal suspects), so that the target population can be efficiently screened according to the degree of matching.
As shown in FIG. 4, FIG. 4 is a schematic structural diagram of a computer device 1 that implements the face attribute recognition method in a preferred embodiment of the present application. The computer device 1 includes a memory 20, a processor 30, and computer-readable instructions 40 stored in the memory 20 and executable on the processor 30, such as a face attribute recognition program.

When the processor 30 executes the computer-readable instructions 40, the steps of the face attribute recognition method in the above embodiments are implemented:
Step S11: construct a parallel computing model, the parallel computing model including a master node and a plurality of task nodes connected to the master node, and allocate a corresponding storage space to each task node, wherein all of the task nodes are divided into one first-type task node and at least two second-type task nodes, and each second-type task node corresponds to one face attribute.

The parallel computing model may be one of the Parallel Random Access Machine (PRAM) model, the Bulk Synchronous Parallel (BSP) computing model, the LogP model and the Block Distributed Memory (BDM) model. The master node of the parallel computing model is the input point of the target image to be recognized and the task assignment node TaskTrack. The task nodes are the computation points for the face attributes, i.e. the task execution nodes JobTrack. The first-type task node is used to compute and identify whether the target image includes a face image. The second-type task nodes are used to compute and recognize the face attributes of the face image when the target image includes a face image.

The face attributes include gender, age, race, expression and so on. The number of second-type task nodes can be set according to the number of face attributes to be computed; the more face attributes to be computed, the larger the number of second-type task nodes. For example, the parallel computing model may include four second-type task nodes whose corresponding face attributes are gender, age, race and expression, respectively.
Step S12: acquire a target image through the master node, allocate the target image to the storage space corresponding to the first-type task node, and trigger the first-type task node to perform computation on the target image so as to identify whether the target image includes a face image.

A user may send, through a terminal device (such as a mobile phone or a tablet computer), a processing request for the target image stored in a designated directory of the terminal device to the master node. Upon receiving the processing request, the master node acquires the target image through the designated directory.
In this embodiment, before identifying whether the target image includes a face image, the method further includes: triggering the first-type task node to preprocess the target image so as to eliminate interference from environmental factors such as lighting.

Because of illumination effects, the target image is subject to many uncertainties during capture, such as light intensity, light-source direction and color. These make the gray levels of the target image uneven and give the face a high local contrast, which degrades the final recognition result. It is therefore necessary to apply a light-adjustment technique to the acquired target image.

Further, identifying whether the target image includes a face image includes:

a) triggering the first-type task node to scan the target image for information on preset local features of a human face; and

b) when the image contains information on preset local features of a human face, determining that the target image includes a face image.

Specifically, the preset local features may be the nose, eyes, mouth and so on. When the scan of the target image finds the preset local features, it is determined that the target image includes a face image.
Step S13: when the acquired target image includes a face image, extract the face image from the target image through the first-type task node so as to realize liveness detection and recognition, and return the extracted face image to the master node.

In this embodiment, extracting the face image from the target image through the first-type task node includes:

a) determining, through the first-type task node, the number of face images included in the target image;

b) when the target image includes only one face image, delimiting, through the first-type task node, a rectangular boundary around the face image, and extracting the face image according to the rectangular boundary;

c) when the target image includes at least two face images, delimiting, through the first-type task node, a rectangular boundary around each face image, calculating the area of the face image bounded by each rectangular boundary, and extracting the face image from the rectangular boundary with the largest area;

d) normalizing the geometric characteristics of the extracted face image through the first-type task node. The geometric normalization brings the face image to the same position, angle and size. Because the distance between a person's two eyes is essentially the same for most people, the positions of the two eyes are usually used as the basis for the geometric normalization of the face image.
Specifically, as shown in FIG. 2, assuming that the positions of the two eyes in the face image are El and Er respectively, the geometric normalization of the face image can be achieved through the following steps:

a) Rotate the face image so that the line ElEr connecting El and Er is horizontal. This guarantees the consistency of the face orientation and reflects the rotation invariance of the face in the image plane;

b) Crop the face image according to a fixed ratio. For example, let point O be the midpoint of ElEr and let the eye distance |ElEr| be d. After cropping, within the 2d×2d image, point O can be kept fixed at (0.5d, d). This guarantees the consistency of the face position and reflects the translation invariance of the face in the image plane;

c) Scale the cropped image down or up to obtain a face image of uniform, standard size. For example, if the prescribed image size is 128×128 pixels, that is, |ElEr| is made a fixed length of 64 pixels, the scaling factor is β = 2d/128. This guarantees the consistency of the face size and reflects the scale invariance of the face in the image plane.
Step S14: the master node allocates the face image to the storage space corresponding to each second-type task node, and triggers all of the second-type task nodes to compute all of the face attributes of the stored face image in parallel, so as to obtain the calculation results for all of the face attributes.

In this embodiment, each second-type task node runs a deep learning framework (for example, the TensorFlow framework), and the deep learning framework calls a trained deep learning classifier model to perform computation on the face image so as to recognize the corresponding face attribute.

Taking the face attribute "gender" as an example, the process of computing gender with the deep learning framework includes:

a) obtaining the gender characteristic parameters of the face image and building a parameter model;

The gender characteristic parameters include hair (including beard) characteristic parameters, facial organ parameters, contour parameters, sexual characteristic parameters and so on. Taking the beard characteristic parameters as an example: first the facial feature points are located and skin-color segmentation is performed; an active shape model algorithm locates the facial feature points and thereby obtains the chin region; a skin-color segmentation algorithm then separates out the non-skin-color region of the chin; finally, a beard-color discrimination method detects the beard in the non-skin-color region of the chin, so that the beard characteristics can be parameterized. A characteristic value can be assigned to the beard in the face image according to its color; for example, an initial value is assigned according to the color or density of the beard, so that the beard characteristic parameter is obtained from the preset initial value.

In this embodiment, feature extraction and classification of the face image can be performed with methods such as Local Binary Patterns (LBP), neural network methods and SVM (Support Vector Machine) methods so as to obtain the gender characteristic parameters.

b) calling a face gender classifier model;

c) comparing the parameter model with the face gender classifier model, so as to recognize the gender of the face image.
Step S15: the calculation result of the face attribute of each second-type task node is returned to the master node, and the master node is triggered to aggregate the calculation results of all of the face attributes.

For example, when the calculation results of the second-type task nodes are: gender is male, age is 18-20 years, race is Han, and expression is smiling, the master node obtains, after aggregation, the calculation results of the different face attributes of the face image.

In this embodiment, returning the calculation result of the face attribute of each second-type task node to the master node includes:

a) polling each second-type task node through the master node, and sending a polling request packet to each second-type task node, so that when a second-type task node has received the polling request packet and has finished computing its face attribute, it returns its calculation result to the master node;

b) continuing, at regular intervals, to send the polling request packet from the master node to the other second-type task nodes that have not yet returned a calculation result, so that those second-type task nodes return their calculation results to the master node once they have finished computing their face attributes.
Step S16: output the aggregated result of the master node.

In this embodiment, the computer device includes a display screen, and the aggregated result is displayed on the display screen. The display screen may be a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display. Of course, in other embodiments, the aggregated result may also be played back through the microphone of the computer device.
Alternatively, when the processor 30 executes the computer-readable instructions 40, the functions of the modules/units in the above embodiment of the face attribute recognition device, such as the units 101-105 in FIG. 3, are implemented.

Exemplarily, the computer-readable instructions 40 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 20 and executed by the processor 30 to complete the present application. The one or more modules/units may be a series of computer-readable instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 40 in the computer device 1. For example, the computer-readable instructions 40 may be divided into the construction module 101, the allocation module 102, the extraction module 103, the return module 104 and the output module 105 in FIG. 3.

The computer device 1 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. Those skilled in the art can understand that the schematic diagram is only an example of the computer device 1 and does not constitute a limitation on the computer device 1; the computer device 1 may include more or fewer components than illustrated, combine certain components, or have different components. For example, the computer device 1 may also include input/output devices, network access devices, buses and the like.
The processor 30 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. The general-purpose processor may be a microprocessor, or the processor 30 may be any conventional processor. The processor 30 is the control center of the computer device 1 and connects the various parts of the entire computer device 1 using various interfaces and lines.

The memory 20 may be used to store the computer-readable instructions 40 and/or the modules/units. The processor 30 implements the various functions of the computer device 1 by running or executing the computer-readable instructions and/or modules/units stored in the memory 20 and by calling the data stored in the memory 20. The memory 20 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like, and the data storage area may store data created according to the use of the computer device 1 (such as audio data, a phone book, etc.). In addition, the memory 20 may include a high-speed random access memory, and may also include a non-volatile memory such as a hard disk, an internal memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.

If the modules/units integrated in the computer device 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a non-volatile readable storage medium. Based on this understanding, the present application may implement all or part of the processes in the methods of the above embodiments, which may also be completed by instructing the relevant hardware through computer-readable instructions. The computer-readable instructions may be stored in a non-volatile readable storage medium, and when executed by a processor, the computer-readable instructions may implement the steps of the above method embodiments. The computer-readable instructions include computer-readable instruction code, and the computer-readable instruction code may be in the form of source code, object code, an executable file, or some intermediate form. The non-volatile readable medium may include: any entity or device capable of carrying the computer-readable instruction code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the non-volatile readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the non-volatile readable medium does not include electrical carrier signals and telecommunications signals.
In the several embodiments provided in this application, it should be understood that the disclosed computer device and method may be implemented in other ways. For example, the computer device embodiments described above are merely illustrative; for instance, the division of the units is only a division by logical function, and there may be other division manners in actual implementation.

In addition, the functional units in the embodiments of the present application may be integrated in the same processing unit, or each unit may exist alone physically, or two or more units may be integrated in the same unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional modules.

It is obvious to those skilled in the art that the present application is not limited to the details of the above exemplary embodiments, and that the present application can be implemented in other specific forms without departing from the spirit or essential characteristics of the present application. Therefore, from whichever point of view, the embodiments should be regarded as exemplary and non-limiting. The scope of the present application is defined by the appended claims rather than by the above description, and all changes that fall within the meaning and scope of equivalents of the claims are therefore intended to be embraced by the present application. Any reference signs in the claims should not be construed as limiting the claims involved. Furthermore, it is clear that the word "comprise" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or computer devices recited in the computer device claims may also be implemented by the same unit or computer device through software or hardware. The words "first", "second" and the like are used to denote names and do not indicate any particular order.

Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application and not to limit them. Although the present application is described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of the present application may be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of the present application.
Claims (20)
- A face attribute recognition method, comprising: constructing a parallel computing model, the parallel computing model comprising a master node and a plurality of task nodes connected to the master node, and allocating a corresponding storage space to each task node, wherein all of the task nodes are divided into one first-type task node and at least two second-type task nodes, and each second-type task node corresponds to one face attribute; acquiring a target image through the master node, allocating the target image to the storage space corresponding to the first-type task node, and triggering the first-type task node to perform computation on the target image to identify whether the target image includes a face image; when the acquired target image includes a face image, extracting the face image from the target image through the first-type task node, and returning the extracted face image to the master node; allocating the face image, through the master node, to the storage space corresponding to each second-type task node, and triggering all of the second-type task nodes to compute all of the face attributes of the stored face image in parallel, so as to obtain calculation results for all of the face attributes; and returning the calculation result of the face attribute of each second-type task node to the master node, and triggering the master node to aggregate the calculation results of all of the face attributes.
- The face attribute recognition method according to claim 1, further comprising: outputting the aggregated result of the master node.
- The face attribute recognition method according to claim 1, wherein identifying whether the target image includes a face image comprises: triggering the first-type task node to scan the target image for information on preset local features of a human face; and when the image contains information on preset local features of a human face, determining that the target image includes a face image.
- The face attribute recognition method according to claim 1, wherein extracting the face image from the target image through the first-type task node comprises: determining, through the first-type task node, the number of face images included in the target image; when the target image includes only one face image, delimiting, through the first-type task node, a rectangular boundary around the face image, and extracting the face image according to the rectangular boundary; when the target image includes at least two face images, delimiting, through the first-type task node, a rectangular boundary around each face image, calculating the area of the face image bounded by each rectangular boundary, and extracting the face image from the rectangular boundary with the largest area; and normalizing the geometric characteristics of the extracted face image through the first-type task node, so that the face image is normalized to the same position, angle and size.
- The face attribute recognition method according to claim 1, wherein returning the calculation result of the face attribute of each second-type task node to the master node comprises: polling each second-type task node through the master node, and sending a polling request packet to each second-type task node, so that when a second-type task node has received the polling request packet and has finished computing the face attribute, it returns its calculation result to the master node; and continuing, at regular intervals, to send the polling request packet from the master node to other second-type task nodes that have not yet returned a calculation result, so that those second-type task nodes return their calculation results to the master node once they have finished computing the face attributes.
- The face attribute recognition method according to claim 1, wherein the face attributes include gender, age, race and expression.
- The face attribute recognition method according to claim 1, wherein the parallel computing model adopts one of a Parallel Random Access Machine model, a Bulk Synchronous Parallel computing model, a LogP model and a block distributed memory model.
- A face attribute recognition device, comprising: a construction module for constructing a parallel computing model, the parallel computing model comprising a master node and a plurality of task nodes connected to the master node, and for allocating a corresponding storage space to each task node, wherein all of the task nodes are divided into one first-type task node and at least two second-type task nodes, and each second-type task node corresponds to one face attribute; an allocation module for acquiring a target image through the master node, allocating the target image to the storage space corresponding to the first-type task node, and triggering the first-type task node to perform computation on the target image to identify whether the target image includes a face image; an extraction module for extracting, when the acquired target image includes a face image, the face image from the target image through the first-type task node; and a return module for returning the face image extracted by the extraction module to the master node; the allocation module is further used to allocate the face image, through the master node, to the storage space corresponding to each second-type task node, and to trigger all of the second-type task nodes to compute all of the face attributes of the stored face image in parallel, so as to obtain calculation results for all of the face attributes; and the return module is further used to return the calculation result of the face attribute of each second-type task node to the master node and to trigger the master node to aggregate the calculation results of all of the face attributes.
- 一种计算机装置,其特征在于,包括处理器和存储器,所述存储器中存储至少一个计算机可读指令,所述处理器执行所述计算机可读指令以实现以下步骤:A computer device, characterized by comprising a processor and a memory, wherein the memory stores at least one computer-readable instruction, and the processor executes the computer-readable instruction to implement the following steps:构建一并行计算模型,所述并行计算模型包括一主节点以及与所述主节点连接的多个任务节点,为每一任务节点分配对应的存储空间,其中,所有所述任务节点被划分为一个第一类任务节点以及至少两个第二类任务节点,每一第二类任务节点对应一人脸属性;Build a parallel computing model, the parallel computing model includes a master node and a plurality of task nodes connected to the master node, each task node is allocated a corresponding storage space, wherein all the task nodes are divided into one A first type task node and at least two second type task nodes, each second type task node corresponds to a face attribute;通过所述主节点获取一目标图像,并将所述目标图像分配至所述第一类任务节点对应的存储空间中,触发所述第一类任务节点对所述目标图像进行计算,以识别所述目标图像中是否包括人脸图像;Acquiring a target image through the master node, and allocating the target image to the storage space corresponding to the first-type task node, triggering the first-type task node to calculate the target image to identify the target image Whether the target image includes a face image;当所获取的目标图像中包括人脸图像时,通过所述第一类任务节点从所述目标图像中提取所述人脸图像,并向所述主节点返回所提取的所述人脸图像;When the acquired target image includes a face image, extract the face image from the target image through the first-type task node, and return the extracted face image to the master node;通过所述主节点将所述人脸图像分配至每一第二类任务节点对应的存储空间中,触发所有所述第二类任务节点对所存储的人脸图像的所有人脸属性进行并行计算,从而获得所有人脸属性的计算结果;以及Assigning the face image to the storage space corresponding to each second-type task node through the master node, triggering all the second-type task nodes to perform parallel calculation on all face attributes of the stored face images To obtain the calculation result of all face attributes; and向所述主节点返回每一第二类任务节点的人脸属性的计算结果,触发所述主节点将所有人脸属性的计算结果进行汇总。Returning the calculation result of the face attributes of each task node of the second type to the master node, triggering the master node to summarize the calculation results of the face attributes of all people.
- The computer device according to claim 9, wherein when executing the computer-readable instruction, the processor further implements the following step: outputting the aggregated result of the master node.
- The computer device according to claim 9, wherein when identifying whether the target image includes a face image, the processor executes the computer-readable instruction to implement the following steps: triggering the first-type task node to scan the target image for information of preset local features of a human face; and when the target image contains information of the preset local features of a human face, determining that the target image includes a face image.
- The computer device according to claim 9, wherein when extracting the face image from the target image through the first-type task node, the processor executes the computer-readable instruction to implement the following steps: determining, through the first-type task node, the number of face images included in the target image; when the target image includes only one face image, delimiting, through the first-type task node, a rectangular boundary around the face image and extracting the face image according to the rectangular boundary; when the target image includes at least two face images, delimiting, through the first-type task node, a rectangular boundary around each face image, calculating the area of the face image delimited by each rectangular boundary, and extracting the face image from the rectangular boundary with the largest area; and normalizing, through the first-type task node, the geometric characteristics of the extracted face image, so that the face image is normalized to the same position, angle, and size.
- The computer device according to claim 9, wherein when returning the computation result of the face attribute of each second-type task node to the master node, the processor executes the computer-readable instruction to implement the following steps: polling each second-type task node through the master node by sending a polling request message to each second-type task node, so that when a second-type task node has received the polling request message and at least one second-type task node has finished computing the face attribute, the computation result of that second-type task node is returned to the master node; and continuing, through the master node, to periodically send the polling request message to the other second-type task nodes that have not yet returned a computation result, so that each of the other second-type task nodes returns its computation result to the master node once it has finished computing the face attribute.
- The computer device according to claim 9, wherein the face attributes include gender, age, race, and expression.
- A non-volatile readable storage medium storing at least one computer-readable instruction, wherein the computer-readable instruction, when executed by a processor, implements the following steps: building a parallel computing model, the parallel computing model comprising a master node and a plurality of task nodes connected to the master node, and allocating a corresponding storage space to each task node, wherein all the task nodes are divided into one first-type task node and at least two second-type task nodes, and each second-type task node corresponds to one face attribute; acquiring a target image through the master node, allocating the target image to the storage space corresponding to the first-type task node, and triggering the first-type task node to perform computation on the target image so as to identify whether the target image includes a face image; when the acquired target image includes a face image, extracting the face image from the target image through the first-type task node and returning the extracted face image to the master node; allocating the face image, through the master node, to the storage space corresponding to each second-type task node, and triggering all the second-type task nodes to compute all face attributes of the stored face image in parallel, thereby obtaining computation results for all the face attributes; and returning the computation result of the face attribute of each second-type task node to the master node, and triggering the master node to aggregate the computation results of all the face attributes.
- The storage medium according to claim 15, wherein the computer-readable instruction, when executed by the processor, further implements the following step: outputting the aggregated result of the master node.
- The storage medium according to claim 15, wherein when identifying whether the target image includes a face image, the computer-readable instruction is executed by the processor to implement the following steps: triggering the first-type task node to scan the target image for information of preset local features of a human face; and when the target image contains information of the preset local features of a human face, determining that the target image includes a face image.
- The storage medium according to claim 15, wherein when extracting the face image from the target image through the first-type task node, the computer-readable instruction is executed by the processor to implement the following steps: determining, through the first-type task node, the number of face images included in the target image; when the target image includes only one face image, delimiting, through the first-type task node, a rectangular boundary around the face image and extracting the face image according to the rectangular boundary; when the target image includes at least two face images, delimiting, through the first-type task node, a rectangular boundary around each face image, calculating the area of the face image delimited by each rectangular boundary, and extracting the face image from the rectangular boundary with the largest area; and normalizing, through the first-type task node, the geometric characteristics of the extracted face image, so that the face image is normalized to the same position, angle, and size.
- The storage medium according to claim 15, wherein when returning the computation result of the face attribute of each second-type task node to the master node, the computer-readable instruction is executed by the processor to implement the following steps: polling each second-type task node through the master node by sending a polling request message to each second-type task node, so that when a second-type task node has received the polling request message and at least one second-type task node has finished computing the face attribute, the computation result of that second-type task node is returned to the master node; and continuing, through the master node, to periodically send the polling request message to the other second-type task nodes that have not yet returned a computation result, so that each of the other second-type task nodes returns its computation result to the master node once it has finished computing the face attribute.
- The storage medium according to claim 15, wherein the face attributes include gender, age, race, and expression.
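For readers who want a concrete picture of the parallel computing model recited in the device, computer-device, and storage-medium claims above, the following minimal Python sketch models the master node as a process-pool dispatcher: one first-type worker detects and extracts the face, and one second-type worker per attribute computes its attribute in parallel before the master aggregates the results. The function names, the placeholder detector and classifiers, and the attribute list are illustrative assumptions, not the claimed implementation.

```python
# Minimal sketch of the claimed parallel computing model, using placeholder
# detection and attribute classifiers; not the patented implementation itself.
from concurrent.futures import ProcessPoolExecutor
from typing import Dict, Optional

import numpy as np

# One second-type task node per face attribute (illustrative names).
FACE_ATTRIBUTES = ["gender", "age", "race", "expression"]


def first_type_task(target_image: np.ndarray) -> Optional[np.ndarray]:
    """First-type task node: detect a face and return the cropped face image."""
    # Placeholder detection: a real node would scan for preset local features.
    h, w = target_image.shape[:2]
    if h == 0 or w == 0:
        return None
    # For this sketch, treat the whole frame as the face region.
    return target_image


def second_type_task(attribute: str, face_image: np.ndarray) -> str:
    """Second-type task node: compute one face attribute on its stored image."""
    # Placeholder classifier; a real node would run a trained model here.
    return f"{attribute}:unknown({face_image.shape})"


def master_node(target_image: np.ndarray) -> Dict[str, str]:
    """Master node: dispatch to the first-type node, then fan out in parallel."""
    with ProcessPoolExecutor() as pool:
        # Allocate the target image to the first-type task node.
        face_image = pool.submit(first_type_task, target_image).result()
        if face_image is None:
            return {}
        # Allocate the face image to every second-type task node and
        # compute all face attributes in parallel.
        futures = {attr: pool.submit(second_type_task, attr, face_image)
                   for attr in FACE_ATTRIBUTES}
        # Aggregate the per-attribute results at the master node.
        return {attr: fut.result() for attr, fut in futures.items()}


if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    print(master_node(frame))
```

In this sketch the aggregation step is simply a dictionary comprehension over the futures; a fuller master node would typically combine it with the polling behaviour sketched further below.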
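The "preset local feature" scan recited in the detection step (claims 11 and 17 above) could, under one set of assumptions, be approximated with cascaded local-feature detectors; the sketch below uses OpenCV's bundled Haar cascades (frontal face plus eyes) purely as stand-ins for whatever feature templates the first-type task node is actually configured with.

```python
# Sketch of the "preset local feature" check, assuming OpenCV's bundled
# Haar cascades as stand-ins for the node's configured feature templates.
import cv2

_FACE = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
_EYES = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")


def contains_face_image(target_image) -> bool:
    """Return True when the target image contains preset local facial features."""
    gray = cv2.cvtColor(target_image, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in _FACE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        roi = gray[y:y + h, x:x + w]
        # Require at least one local feature (an eye) inside the candidate region.
        if len(_EYES.detectMultiScale(roi)) > 0:
            return True
    return False
```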
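The largest-area selection and geometric normalization recited in the extraction step (claims 12 and 18 above) might look roughly like the sketch below; the OpenCV detector and the fixed 128x128 canonical size are assumptions for illustration, and rotation alignment is reduced to a plain resize.

```python
# Sketch of largest-face extraction and geometric normalization, assuming an
# OpenCV Haar detector and a fixed 128x128 canonical size for illustration.
from typing import Optional

import cv2
import numpy as np

_FACE = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def extract_normalized_face(target_image: np.ndarray,
                            size: int = 128) -> Optional[np.ndarray]:
    """Delimit a rectangle around each face, keep the largest, and normalize it."""
    gray = cv2.cvtColor(target_image, cv2.COLOR_BGR2GRAY)
    boxes = _FACE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    # Area of the face image delimited by each rectangular boundary.
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])
    face = target_image[y:y + h, x:x + w]
    # Geometric normalization: same position and size (angle alignment omitted).
    return cv2.resize(face, (size, size))
```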
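The polling exchange recited in claims 13 and 19 above can be sketched as a simple retry loop; the message shape, the per-node poll() interface, and the retry interval are all illustrative assumptions rather than the claimed protocol.

```python
# Rough sketch of the master node polling second-type task nodes; the message
# shape, poll() interface, and retry interval are assumptions for illustration.
import time
from typing import Dict, List, Optional


class AttributeNode:
    """Stand-in for a second-type task node computing one face attribute."""

    def __init__(self, attribute: str, ready_after: float):
        self.attribute = attribute
        self._ready_at = time.monotonic() + ready_after

    def poll(self, request: dict) -> Optional[dict]:
        """Answer a polling request message; return the result only when done."""
        if time.monotonic() >= self._ready_at:
            return {"attribute": self.attribute, "result": "unknown"}
        return None  # still computing


def collect_results(nodes: List[AttributeNode], interval: float = 0.1) -> Dict[str, str]:
    """Master node: poll every node, then keep re-polling the pending ones."""
    pending = list(nodes)
    results: Dict[str, str] = {}
    while pending:
        for node in list(pending):
            reply = node.poll({"type": "poll_request"})
            if reply is not None:
                results[reply["attribute"]] = reply["result"]
                pending.remove(node)
        if pending:
            time.sleep(interval)  # periodically resend the polling request
    return results


if __name__ == "__main__":
    nodes = [AttributeNode(a, i * 0.2) for i, a in
             enumerate(["gender", "age", "race", "expression"])]
    print(collect_results(nodes))
```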
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811281100.0 | 2018-10-30 | ||
CN201811281100.0A CN109522824A (en) | 2018-10-30 | 2018-10-30 | Face character recognition methods, device, computer installation and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020087922A1 true WO2020087922A1 (en) | 2020-05-07 |
Family
ID=65772696
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/089662 WO2020087922A1 (en) | 2018-10-30 | 2019-05-31 | Facial attribute identification method, device, computer device and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109522824A (en) |
WO (1) | WO2020087922A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111932266A (en) * | 2020-07-24 | 2020-11-13 | 深圳市富途网络科技有限公司 | Information processing method, information processing device, electronic equipment and storage medium |
CN112135092A (en) * | 2020-09-03 | 2020-12-25 | 杭州海康威视数字技术股份有限公司 | Image processing method |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109522824A (en) * | 2018-10-30 | 2019-03-26 | 平安科技(深圳)有限公司 | Face character recognition methods, device, computer installation and storage medium |
CN110866466B (en) * | 2019-10-30 | 2023-12-26 | 平安科技(深圳)有限公司 | Face recognition method, device, storage medium and server |
CN113128297A (en) * | 2019-12-31 | 2021-07-16 | 深圳云天励飞技术有限公司 | Equipment docking method and device, electronic equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100008550A1 (en) * | 2008-07-14 | 2010-01-14 | Lockheed Martin Corporation | Method and apparatus for facial identification |
CN103310460A (en) * | 2013-06-24 | 2013-09-18 | 安科智慧城市技术(中国)有限公司 | Image characteristic extraction method and system |
CN107862242A (en) * | 2017-09-19 | 2018-03-30 | 汉柏科技有限公司 | A kind of magnanimity face characteristic comparison method and system based on Map Reduce |
CN109522824A (en) * | 2018-10-30 | 2019-03-26 | 平安科技(深圳)有限公司 | Face character recognition methods, device, computer installation and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101904192B1 (en) * | 2016-05-30 | 2018-10-05 | 한국과학기술원 | User -independent Face Landmark Detection and Tracking Apparatus for Spatial Augmented Reality Interaction |
CN106529402B (en) * | 2016-09-27 | 2019-05-28 | 中国科学院自动化研究所 | The face character analysis method of convolutional neural networks based on multi-task learning |
CN107844781A (en) * | 2017-11-28 | 2018-03-27 | 腾讯科技(深圳)有限公司 | Face character recognition methods and device, electronic equipment and storage medium |
- 2018-10-30: CN CN201811281100.0A patent/CN109522824A/en active Pending
- 2019-05-31: WO PCT/CN2019/089662 patent/WO2020087922A1/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100008550A1 (en) * | 2008-07-14 | 2010-01-14 | Lockheed Martin Corporation | Method and apparatus for facial identification |
CN103310460A (en) * | 2013-06-24 | 2013-09-18 | 安科智慧城市技术(中国)有限公司 | Image characteristic extraction method and system |
CN107862242A (en) * | 2017-09-19 | 2018-03-30 | 汉柏科技有限公司 | A kind of magnanimity face characteristic comparison method and system based on Map Reduce |
CN109522824A (en) * | 2018-10-30 | 2019-03-26 | 平安科技(深圳)有限公司 | Face character recognition methods, device, computer installation and storage medium |
Non-Patent Citations (1)
Title |
---|
NANJINGDREAMFLY: "Face Attribute Recognition Algorithm | Gender+Race+Age+Expression", 13 March 2017 (2017-03-13), pages 1 - 6, XP055700189, Retrieved from the Internet <URL:https://blog.csdn.net/nanjingdreamfly/article/details/61923258> *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111932266A (en) * | 2020-07-24 | 2020-11-13 | 深圳市富途网络科技有限公司 | Information processing method, information processing device, electronic equipment and storage medium |
CN111932266B (en) * | 2020-07-24 | 2023-11-17 | 深圳市富途网络科技有限公司 | Information processing method, information processing device, electronic equipment and storage medium |
US12118823B2 (en) | 2020-07-24 | 2024-10-15 | Shenzhen Futu Network Technology Co., Ltd | Information processing method and apparatus, electronic device, and storage medium |
CN112135092A (en) * | 2020-09-03 | 2020-12-25 | 杭州海康威视数字技术股份有限公司 | Image processing method |
CN112135092B (en) * | 2020-09-03 | 2023-05-26 | 杭州海康威视数字技术股份有限公司 | Image processing method |
Also Published As
Publication number | Publication date |
---|---|
CN109522824A (en) | 2019-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020087922A1 (en) | Facial attribute identification method, device, computer device and storage medium | |
JP6926339B2 (en) | Image clustering methods and devices, electronic devices and storage media | |
US10713532B2 (en) | Image recognition method and apparatus | |
JP7229174B2 (en) | Person identification system and method | |
WO2019218824A1 (en) | Method for acquiring motion track and device thereof, storage medium, and terminal | |
CN110728255B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
US11238272B2 (en) | Method and apparatus for detecting face image | |
US8326001B2 (en) | Low threshold face recognition | |
Marciniak et al. | Influence of low resolution of images on reliability of face detection and recognition | |
US11163978B2 (en) | Method and device for face image processing, storage medium, and electronic device | |
CN110163111B (en) | Face recognition-based number calling method and device, electronic equipment and storage medium | |
JP2020515983A (en) | Target person search method and device, device, program product and medium | |
CN105654039B (en) | The method and apparatus of image procossing | |
WO2019033569A1 (en) | Eyeball movement analysis method, device and storage medium | |
WO2012154369A1 (en) | Scaling of visual content based upon user proximity | |
CN111008935B (en) | Face image enhancement method, device, system and storage medium | |
CN111814620A (en) | Face image quality evaluation model establishing method, optimization method, medium and device | |
WO2019095997A1 (en) | Image recognition method and device, computer device and computer-readable storage medium | |
CN110689046A (en) | Image recognition method, image recognition device, computer device, and storage medium | |
WO2020172870A1 (en) | Method and apparatus for determining motion trajectory of target object | |
Abate et al. | Kurtosis and skewness at pixel level as input for SOM networks to iris recognition on mobile devices | |
WO2019095998A1 (en) | Image recognition method and device, computer device and computer-readable storage medium | |
CN111680664A (en) | Face image age identification method, device and equipment | |
CN110348272B (en) | Dynamic face recognition method, device, system and medium | |
Srivastava et al. | Automated emergency paramedical response system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19878774; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 25.08.2021) |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19878774; Country of ref document: EP; Kind code of ref document: A1 |