Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context.
Embodiments of the present disclosure provide a 3D face information processing method capable of realizing 3D facial aesthetic analysis.
Technical solutions of embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a 3D face information processing method according to an embodiment of the present disclosure.
Referring to fig. 1, a method provided by an embodiment of the present disclosure includes:
In step 101, a face image of the user is acquired.
In this step, a face image captured by the user through a camera or directly uploaded by the user may be acquired. The face image generally includes the complete frontal facial-feature region, the cheek regions of the side face, the ears, and the like.
In step 102, a 3D model of the human face is generated from the acquired face image.
In this step, after the face image of the user is acquired, a face 3D model can be generated from the acquired face image using a suitable 3D face recognition algorithm. The 3D face recognition algorithm may be, for example, an algorithm based on image features, an algorithm based on model variable parameters, or an algorithm based on deep learning; the disclosure is not limited in this respect.
In this step, when the face image is captured by a 3D camera, a first 3D model of the face is generated from the acquired face image; or, when the face image is captured by a 2D camera, a second 3D model of the face is generated from the acquired face image.
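By way of illustration only, the branching between the first and second 3D models may be sketched in Python as follows. This is a minimal sketch, not the implementation of the disclosure; `build_3d_model_from_depth` and `fit_3d_model_from_2d` are hypothetical placeholders for whichever reconstruction algorithm (image-feature-based, model-parameter-based, or deep-learning-based) is actually employed.

```python
import numpy as np

def build_3d_model_from_depth(image: np.ndarray):
    """Hypothetical placeholder: reconstruct a true 3D model from 3D-camera data."""
    raise NotImplementedError

def fit_3d_model_from_2d(image: np.ndarray):
    """Hypothetical placeholder: fit a 3D model (e.g., a 3DMM) to a 2D photo."""
    raise NotImplementedError

def generate_face_3d_model(image: np.ndarray, camera_type: str):
    """Branch on the capture device, mirroring the first/second model split above."""
    if camera_type == "3d":
        # 3D camera: depth is available, so the first 3D model of the face
        # can be reconstructed directly from the scanned coordinates.
        return build_3d_model_from_depth(image)
    if camera_type == "2d":
        # 2D camera: only (x, y) pixels are available, so the second 3D model
        # must be fitted from the photo alone.
        return fit_3d_model_from_2d(image)
    raise ValueError(f"unknown camera type: {camera_type!r}")
```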
In step 103, a face point cloud image is generated and displayed according to the key feature points in the face 3D model.
In this step, the key feature points in the face 3D model may be obtained and connected to generate and display a face point cloud image. For example, the connecting lines may form a regular distribution of triangles or quadrilaterals, thereby generating the face point cloud image. The key feature points in the face 3D model may include, but are not limited to, points at the eyes, pupils, nose tip, mouth corners, ears, and eyebrows, as well as contour points of each part of the face.
Point cloud data (a point cloud) refers to scanned data recorded in the form of points; each point includes three-dimensional coordinates, and some points may also include color information (RGB) or reflection intensity information.
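The connection processing described above may be illustrated with a short sketch. The example below, offered only as an illustration under the assumption that 2D projections of the key feature points are available, uses Delaunay triangulation from SciPy to connect a handful of invented keypoint coordinates into a regular triangular distribution.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical 2D projections of key feature points (eye centers, nose tip,
# mouth corners, brow points, a chin contour point); coordinates are invented.
keypoints_2d = np.array([
    [120, 140], [200, 138],   # eye centers
    [160, 190],               # nose tip
    [130, 240], [190, 240],   # mouth corners
    [100, 100], [220, 100],   # brow points
    [160, 290],               # chin contour point
], dtype=float)

# Connect the key feature points into a regular triangular distribution.
tri = Delaunay(keypoints_2d)

# Each simplex is a triangle of three point indices; drawing the edge set
# of all triangles yields the connected face point cloud image.
edges = set()
for a, b, c in tri.simplices:
    for e in ((a, b), (b, c), (c, a)):
        edges.add(tuple(sorted(map(int, e))))
print(f"{len(keypoints_2d)} points, {len(edges)} edges to draw")
```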
In this step, the face point cloud image may be switched among different set parts of the face for display; for example, the point cloud images of the whole face, the eyes, the mouth, the nose, and other parts may be displayed in turn, and the display order may be flexibly set as required.
In step 104, the feature values of the key feature points in the face 3D model are compared with preset aesthetic feature values to obtain an aesthetic analysis result.
In this step, the feature values of the key feature points in the face 3D model may be determined and compared with the preset aesthetic feature values to obtain an aesthetic analysis result, which may then be displayed in different views.
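As a concrete, hedged illustration of this comparison, the following Python sketch checks measured feature values against preset ideal ranges; the feature names, ranges, and measured values are hypothetical and do not come from the disclosure.

```python
# Hypothetical preset aesthetic feature values: ideal ranges per feature.
PRESET_AESTHETIC_RANGES = {
    "lip_line_ratio": (1.4, 1.6),
    "nose_width_mm": (32.0, 38.0),
    "eye_length_mm": (28.0, 34.0),
}

def aesthetic_compare(measured: dict) -> dict:
    """Compare measured feature values against the preset ideal ranges."""
    result = {}
    for name, value in measured.items():
        lo, hi = PRESET_AESTHETIC_RANGES[name]
        result[name] = {
            "value": value,
            "ideal_range": (lo, hi),
            "ideal": lo <= value <= hi,  # within the preset aesthetic range?
        }
    return result

# Example: the lip-line ratio falls in the ideal range, the nose width does not.
print(aesthetic_compare({"lip_line_ratio": 1.5, "nose_width_mm": 40.0}))
```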
It can be seen that the solution provided by the embodiment of the present disclosure generates a face 3D model from the acquired face image, generates and displays a face point cloud image according to the key feature points in the face 3D model, and compares the feature values of those key feature points with preset aesthetic feature values to obtain an aesthetic analysis result. By processing in this manner, the face 3D model is generated first, the key feature points in it are then used to generate the face point cloud image, and the feature values of the key feature points are compared with the preset aesthetic feature values to obtain an aesthetic analysis result. 3D facial aesthetic analysis is thus realized, and the result can subsequently be displayed in 3D form, giving the analysis a stereoscopic quality that is more vivid and accurate and improves the user experience.
Fig. 2 is another schematic flow diagram of a 3D face information processing method according to an embodiment of the present disclosure. Relative to Fig. 1, Fig. 2 adds the steps of loading a waiting interface and displaying the aesthetic analysis result.
Referring to fig. 2, a method provided by an embodiment of the present disclosure includes:
In step 201, a face image of the user is acquired.
This step may be as described with reference to step 101.
In step 202, a waiting interface is loaded and displayed.
In this step, while the face 3D model is being generated and before the face point cloud image is displayed, a waiting interface may be loaded and displayed; for example, a preset starry-sky 3D model may be loaded and displayed as the waiting interface. Displaying the waiting interface keeps the user from finding the wait tedious and adds a sense of technology. It should be noted that the waiting interface may also take the form of various animations, music, and the like; the embodiment of the disclosure is not limited in this respect.
In step 203, a 3D model of the human face is generated from the acquired face image.
This step may be as described with reference to step 102.
In step 204, a face point cloud image is generated and displayed according to the key feature points in the face 3D model.
This step may be as described with reference to step 103.
In step 205, the feature values of the key feature points in the face 3D model are compared with the preset aesthetic feature values for aesthetic analysis, and an aesthetic analysis result is obtained.
This step may be as described with reference to step 104.
In step 206, the results of the aesthetic analysis are displayed.
According to the solution of the embodiment of the present disclosure, the aesthetic analysis result may be displayed on different interfaces. For example, the aesthetic analysis result may be displayed on the face point cloud image and/or the first 3D model of the face; or on the face point cloud image and/or the second 3D model of the face; or on the face point cloud image and/or a third 3D model of the face, where the third 3D model of the face is obtained by superimposing the face image captured by the 2D camera and the face point cloud image.
That is, in a 3D camera scenario, a first aesthetic analysis result may be displayed on the face point cloud image and a second aesthetic analysis result may then be displayed on the subsequently displayed first 3D model of the face; alternatively, only the second aesthetic analysis result may be displayed on the subsequently displayed first 3D model, or only the first aesthetic analysis result may be displayed on the face point cloud image. This may be flexibly set as needed.
In a 2D camera scenario, the first aesthetic analysis result may be displayed on the face point cloud image and the second aesthetic analysis result may then be displayed on the subsequently displayed second 3D model of the face; alternatively, only the second aesthetic analysis result may be displayed on the subsequently displayed second 3D model, or only the first aesthetic analysis result may be displayed on the face point cloud image. This may be flexibly set as needed.
Likewise, in a 2D camera scenario, the first aesthetic analysis result may be displayed on the face point cloud image and the second aesthetic analysis result may then be displayed on the subsequently displayed third 3D model of the face; alternatively, only the second aesthetic analysis result may be displayed on the subsequently displayed third 3D model, or only the first aesthetic analysis result may be displayed on the face point cloud image. This may be flexibly set as needed.
It should be noted that the second aesthetic analysis result may also be displayed on other interfaces, for example, on the face image of the user or other selected images.
It is further noted that embodiments of the present disclosure may also generate a report containing the aesthetic analysis results. The generated report may be a full-screen report page, a long report page, or a pictorial report page, and the aesthetic analysis results in the report may include both the first and second aesthetic analysis results or only the second aesthetic analysis result. The generated report may further include the first, second, or third 3D model of the face.
In addition, the solution of the embodiment of the present disclosure may both display the aesthetic analysis result and generate a report containing it; or it may only display the result without generating such a report; or it may only generate the report without displaying the result. This may be flexibly configured as required.
It can be seen that, according to the solution provided by the embodiment of the disclosure, while the face 3D model is being generated and before the face point cloud image is displayed, a waiting interface may be loaded; for example, the preset starry-sky 3D model may be loaded and displayed as the waiting interface, so that the user does not find the wait tedious and a sense of technology is added. In addition, the aesthetic analysis result may be displayed on different interfaces, making the method better suited to a variety of scenarios.
Fig. 3 is another schematic flow chart of a 3D face information processing method according to an embodiment of the present disclosure. Fig. 3 describes aspects of an embodiment of the present disclosure in more detail relative to Figs. 1 and 2.
Referring to fig. 3, a method provided by an embodiment of the present disclosure includes:
In step 301, a face image of the user is acquired.
In this step, a face image captured by the user through a camera or directly uploaded by the user may be acquired. The face image generally includes the complete frontal facial-feature region, the cheek regions of the side face, the ears, and the like.
Here, capturing through the camera may mean shooting a video or taking photos. Referring to Fig. 4, taking video capture as an example, the embodiment of the present disclosure may acquire the user's face image by shooting a video. During shooting, guide information prompting the user to turn his or her head may be generated so that the user completes the video according to the guide information; the front and sides of the face generally need to be captured. After shooting is finished, one frontal picture and two side pictures may be taken from the captured video; the side pictures may be at a set angle, for example 55 or 45 degrees, on the left and right sides of the face.
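Selecting one frontal frame and two side frames from the captured video may be sketched as follows, assuming OpenCV for video decoding. The helper `estimate_yaw` is hypothetical (in practice it would combine landmark detection with head-pose estimation); only the frame-selection logic is illustrated, using the 45-degree set angle as an example.

```python
import cv2

TARGET_YAWS = [0.0, 45.0, -45.0]  # front, right side, left side (set angle)

def estimate_yaw(frame):
    """Hypothetical helper: return the head-pose yaw of the face in degrees,
    or None if no face is found (e.g., via landmarks plus pose estimation)."""
    raise NotImplementedError

def pick_frames(video_path: str):
    """Keep, for each target angle, the frame whose yaw is closest to it."""
    cap = cv2.VideoCapture(video_path)
    best = {t: (None, float("inf")) for t in TARGET_YAWS}  # target -> (frame, error)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yaw = estimate_yaw(frame)
        if yaw is None:
            continue
        for t in TARGET_YAWS:
            err = abs(yaw - t)
            if err < best[t][1]:
                best[t] = (frame, err)
    cap.release()
    return {t: f for t, (f, _) in best.items()}
```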
The camera in this step may be a 3D camera or a 2D camera; a user photographed by a 3D camera is referred to as a 3D user, and a user photographed by a 2D camera is referred to as a 2D user. A 3D camera can collect the three-dimensional spatial coordinates of each point within its field of view, from which a three-dimensional image can be restored by an algorithm; a 2D camera typically acquires only the two-dimensional coordinates, i.e., the (x, y) coordinates, of each point in the image.
When capturing the face image, the user may generally be instructed to meet the following requirements as far as possible: keep the face centered for frontal shots; keep the side-face shooting angle generally between 40 and 80 degrees; keep the picture clear; avoid wearing glasses; avoid occluding the face; and, for users with long hair, preferably tie the hair back so that it does not cover the face.
The method of the embodiment of the disclosure may be applied to a mobile terminal device or another camera-equipped device; for example, the face image may be captured by a camera on the mobile terminal device.
In step 302, a waiting interface is loaded and displayed.
While the face 3D model is being generated and before the face point cloud image is displayed, a waiting interface may be loaded and displayed; for example, the preset starry-sky 3D model may be displayed as the waiting interface. It should be noted that the waiting interface may also take the form of various animations, music, and the like; the embodiment of the disclosure is not limited in this respect. Referring to Fig. 5, taking (but not limited to) the preset starry-sky 3D model displayed in the waiting interface as an example, the facial features of the head portrait may be gradually highlighted until a face 3D model containing the outline of clearer facial features is formed; simple aesthetic analysis and evaluation of the facial features of the face 3D model may also appear, such as "large eyes", "high nose bridge", "short chin", "balanced facial thirds", "golden triangle 45°", and the like. Because generating the face 3D model and computing the key feature points take time, displaying a waiting interface, such as the preset starry-sky 3D model, keeps the user from finding the wait tedious and adds a sense of technology.
While the waiting interface with the preset starry-sky 3D model is displayed, the progress state may also be shown by loading a progress bar; for example, the dots of the progress bar are highlighted and advance slowly, and the progress pauses at a set position, for example 80%, until the model is obtained and an image is returned.
In step 303, a 3D model of the face is generated from the acquired face image.
In this step, after the face image of the user is acquired, a face 3D model can be generated from the acquired face image using a suitable 3D face recognition algorithm. The 3D face recognition algorithm may be, for example, an algorithm based on image features, an algorithm based on model variable parameters, or an algorithm based on deep learning; the disclosure is not limited in this respect.
It should be noted that whether the acquired face image is of a 2D user or of a 3D user, the face 3D model may be generated from the acquired face image using a relevant face recognition algorithm.
In this step, when the face image is captured by a 3D camera, a first 3D model of the face is generated from the acquired face image; or, when the face image is captured by a 2D camera, a second 3D model of the face is generated from the acquired face image.
In step 304, a face point cloud image is generated and displayed according to the key feature points in the face 3D model.
In this step, the key feature points in the face 3D model may be obtained and connected to generate and display a face point cloud image. For example, the connecting lines may form a regular distribution of triangles or quadrilaterals, thereby generating the face point cloud image. The key feature points in the face 3D model may include, but are not limited to, points at the eyes, pupils, nose, mouth, ears, and eyebrows, as well as contour points of each part of the face.
A point cloud refers to scanned data recorded in the form of points; each point includes three-dimensional coordinates, and some points may also include color information (RGB) or reflection intensity information.
The point cloud image of the embodiment of the disclosure may be generated from the face 3D model. Each face 3D model has many associated feature points; the key feature points of the face 3D model may be extracted and connected into a regular triangular or quadrilateral distribution, so that a distinct point cloud image is generated for each face 3D model. That is, different face 3D models produce different point cloud images; Figs. 6 to 7 show interface diagrams of point cloud image generation.
For example, the key feature points of the eyes, eyebrows, mouth, and the like may be densified and connected so that the point cloud image looks closer to the real face. Compared with 2D face analysis, the 3D point cloud image enables finer analysis of the user's facial features and presents facial details more intuitively and stereoscopically; the resulting figure is more attractive and has a strong sense of technology, making it easier for the user to accept.
When the face image is captured by a 2D camera, a converted 3D model is obtained from the second 3D model of the face, a mapping matrix between the key-feature-point coordinates of the face image and the key-feature-point coordinates in the converted 3D model is determined according to a set algorithm, and the face point cloud image is generated and displayed according to the mapping matrix.
That is, if the user is a 2D user, the user's 2D face image has been saved during the capture in the foregoing steps, and a face 3D model of the user is generated from the captured image; this face 3D model carries the model point locations of the key feature points. A design model similar to the face 3D model may be determined and taken as the target, with the key feature points of the corresponding user's face 3D model retained on it, yielding a unified and attractive converted 3D model. A PnP (Perspective-n-Point) algorithm is then used to find the mapping matrix between the 2D key-feature-point coordinates on the 2D face image and the corresponding 3D key-feature-point coordinates of the converted 3D model; points and lines are rendered, according to the mapping matrix, in a window of the same size as the 2D face image, and line drawing is applied to the eyebrows, eyes, mouth, and other parts to generate the 2D user's point cloud image.
It should be noted that the converted 3D model may also be dispensed with: when the face image is captured by a 2D camera, the mapping matrix between the key-feature-point coordinates of the face image and the key-feature-point coordinates in the second 3D model of the face is determined according to a set algorithm, and the face point cloud image is generated and displayed according to the mapping matrix. That is, if the user is a 2D user, the user's 2D face image has been saved during capture, a face 3D model of the user is generated from the captured image, and that model carries the model point locations of the key feature points; the PnP algorithm is used to find the mapping matrix between the 2D key-feature-point coordinates on the 2D face image and the key feature points in the face 3D model, points and lines are rendered in a window of the size of the 2D face image according to the mapping matrix, and line drawing is applied to the eyebrows, eyes, mouth, and other parts to generate the 2D user's point cloud image.
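The PnP step may be sketched with OpenCV, whose `cv2.solvePnP` and `cv2.projectPoints` perform exactly this pose recovery and reprojection. The sketch below is illustrative only: the intrinsic matrix is a crude hypothetical pinhole approximation rather than a calibrated one, and the matching 3D/2D key-feature-point arrays are assumed to be given.

```python
import cv2
import numpy as np

def render_point_cloud(image, pts_3d, pts_2d, all_model_pts):
    """Map model keypoints onto the 2D image via PnP, then project and draw
    all model points in a window of the 2D face image's size."""
    h, w = image.shape[:2]
    # Crude pinhole intrinsics (hypothetical focal length = image width);
    # a real system would use calibrated camera parameters.
    K = np.array([[w, 0, w / 2],
                  [0, w, h / 2],
                  [0, 0, 1]], dtype=np.float64)
    dist = np.zeros(4)  # assume no lens distortion

    # Recover the rotation/translation that maps model coordinates to the image.
    ok, rvec, tvec = cv2.solvePnP(pts_3d.astype(np.float64),
                                  pts_2d.astype(np.float64), K, dist)
    if not ok:
        raise RuntimeError("PnP pose estimation failed")

    # Project every model point into the image plane and draw the dots.
    proj, _ = cv2.projectPoints(all_model_pts.astype(np.float64),
                                rvec, tvec, K, dist)
    canvas = image.copy()
    for x, y in proj.reshape(-1, 2):
        if 0 <= x < w and 0 <= y < h:
            cv2.circle(canvas, (int(x), int(y)), 1, (255, 255, 255), -1)
    return canvas
```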
In step 305, the feature values of the key feature points in the face 3D model are compared with preset aesthetic feature values for aesthetic analysis, and the first aesthetic analysis result is displayed on the face point cloud image.
It should be noted that this step is exemplified by, but not limited to, displaying the first aesthetic analysis result on the face point cloud image; the first aesthetic analysis result may also not be displayed on the face point cloud image.
In this step, the feature values of the key feature points in the face 3D model may be determined and compared with the preset aesthetic feature values to obtain the first aesthetic analysis result.
In this step, the face point cloud image may be switched among different set parts of the face for display, with the first aesthetic analysis results of those parts displayed correspondingly on the image; for example, the frontal full face, the eyes, the mouth, the nose, the side face, and so on may be displayed in turn, and the display order may be flexibly set as required.
The facial aesthetic analysis of the disclosed embodiments may include full-face and local analysis; for example, the frontal view of the face point cloud image may show the full-face, eye, nose, and mouth analyses, while the side view may show the lateral golden-triangle line, a profile line indicating forehead fullness, and the like.
Marked lines (also called marked reference lines) and the feature values of the key feature points may be displayed on the point cloud image. The key analysis area of the whole face or of a certain part may be circled with marked lines.
The feature values of the key feature points can be calculated by a set algorithm, and different acquired face images yield different feature values. A feature value may be a size value, a proportion value, a depth value, an angle value, or the like of a key facial feature point. Embodiments of the present disclosure may preset various aesthetic feature values for different types of analysis; for example, about 50-70 facial features may be set, and the features may be divided into several grades, for example three, each grade carrying a feature score, with the sum of all feature scores giving the final score. The display may show only the features in which the user has obvious advantages, but is not limited thereto. Each feature value can be computed by a corresponding set algorithm, ultimately yielding conclusions such as the size of the eyes or the height of the nose bridge.
For example, the feature value may be a composite attractiveness score, which may be the sum of a frontal-face value and a side-face value. A feature value may also be the range value of the upper or middle facial third, a size value such as the lip-line proportion, nose width, inter-eye distance, or eye length, or a value such as the eyebrow length, inter-eyebrow distance, or lip-line trend.
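A hedged sketch of the three-grade, summed-score scheme described above follows; every feature name, ideal range, grade label, and score below is a hypothetical illustration rather than a value from the disclosure.

```python
# Hypothetical grade labels and per-grade scores.
GRADE_SCORES = {"excellent": 3, "standard": 2, "to_improve": 1}

def grade_feature(value, lo, hi):
    """Assign one of three grades by distance from the ideal range [lo, hi]."""
    if lo <= value <= hi:
        return "excellent"
    span = hi - lo
    if lo - span <= value <= hi + span:
        return "standard"
    return "to_improve"

# feature -> (measured value, ideal range); all numbers are invented.
features = {
    "nose_width": (36.0, (32.0, 38.0)),
    "eye_length": (26.0, (28.0, 34.0)),
    "lip_line_ratio": (1.9, (1.4, 1.6)),
}
grades = {name: grade_feature(v, *rng) for name, (v, rng) in features.items()}
final_score = sum(GRADE_SCORES[g] for g in grades.values())  # sum of feature scores
print(grades, final_score)
```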
For example, referring to Fig. 8, in a point cloud image for full-face analysis, the first aesthetic analysis result may show: "Facial thirds: the upper third is 5 cm, within the ideal range; the middle third is 1.2 cm, within the ideal range; the lip-line proportion is 1.5, ideal", and the like. In addition, descriptions such as "intellect 21, perceived age 56, full lips, heart-shaped face" may be displayed, and values such as the nose width, inter-eye distance, and eye length are marked with marked lines in the point cloud image. Detailed analyses of the eye, nose, and mouth can be seen in the interface diagrams of Figs. 9-11, respectively.
In an embodiment of the present disclosure, the 3D aesthetic analysis includes full-face 3D aesthetic analysis and local 3D aesthetic analysis of the face. The full-face 3D aesthetic analysis may include attractiveness-score analysis, temperament analysis, personality analysis, analysis of dominant facial features and features to improve, and the like; the local 3D aesthetic analysis may include face-shape analysis, eyebrow-shape analysis, eye-shape analysis, lip-shape analysis, and the like. The scope of the full-face and local aesthetic analyses is not limited to the above categories and may include others.
In step 306, the second aesthetic analysis result is displayed on a different interface.
In this step, the first 3D model of the face may be displayed, with the second aesthetic analysis result shown on it; or the second 3D model of the face may be displayed, with the second aesthetic analysis result shown on it; or the third 3D model of the face may be displayed, with the second aesthetic analysis result shown on it.
The first 3D model of the face is a true 3D model generated from the acquired face image when the face image is captured by a 3D camera; the third 3D model of the face is a pseudo-3D model obtained by superimposing the face image captured by a 2D camera and the face point cloud image; and the second 3D model of the face is a 3D model generated from the acquired face image when the face image is captured by a 2D camera.
In this step, the generated first 3D model of the face may be displayed, with marked lines and aesthetic analysis data shown on set parts of the model and a textual description of the aesthetic analysis shown in the area outside the model; or the generated second 3D model of the face may be displayed in the same manner; or the third 3D model of the face, obtained by superimposing the generated face point cloud image on the user's face image, may be displayed, with marked lines and aesthetic analysis data shown on set parts of the point cloud image and the textual description of the aesthetic analysis shown in the area outside it.
The first 3D model of the face may be a true 3D model generated from the user's face image, as shown in Fig. 12, and the third 3D model of the face may be a pseudo-3D model generated from the user's face image, as shown in Fig. 13. The pseudo-3D model is displayed by superimposing the point cloud image on the 2D face image, with marked lines and feature values displayed on the superimposed pseudo-3D model. That is, for a 2D user, the point cloud image generated in the foregoing steps is superimposed on the user's 2D face image, and the thickness and color of the points and lines can be adjusted to produce the pseudo-3D model. Meanwhile, for the sake of appearance and to make the points and lines more visible, a mask layer may be added between the superimposed point cloud image and the 2D face image.
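The superimposition with a mask layer may be sketched with OpenCV as follows; this is an illustrative sketch in which the point coordinates, colors, thicknesses, and blending weight are all assumed values.

```python
import cv2
import numpy as np

def overlay_point_cloud(face_img, points_2d, edges,
                        mask_alpha=0.45, color=(255, 255, 255), thickness=1):
    """Blend a dark mask layer over the 2D face image, then draw the point
    cloud's lines and points on top so they stand out (the pseudo-3D display)."""
    # 1. Mask layer: darken the photo so the points and lines are visible.
    dark = np.zeros_like(face_img)
    canvas = cv2.addWeighted(face_img, 1.0 - mask_alpha, dark, mask_alpha, 0)

    # 2. Draw the connecting lines of the point cloud image.
    for i, j in edges:
        p = (int(points_2d[i][0]), int(points_2d[i][1]))
        q = (int(points_2d[j][0]), int(points_2d[j][1]))
        cv2.line(canvas, p, q, color, thickness, cv2.LINE_AA)

    # 3. Draw the key feature points themselves.
    for x, y in points_2d:
        cv2.circle(canvas, (int(x), int(y)), 2, color, -1, cv2.LINE_AA)
    return canvas
```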
In step 307, upon detecting that a tab menu for a face part has been clicked, the displayed face 3D model is switched to show the corresponding part of the face.
In this step, the corresponding part of the first, second, or third 3D model of the face may be switched to and displayed upon detecting that the tab menu of a face part has been clicked.
For example, when a tab menu of a face part on the first 3D model (i.e., the true 3D model), the second 3D model, or the third 3D model (i.e., the pseudo-3D model) of the face is detected as clicked, the display can jump to the corresponding area to show the analysis of that part. For example: when the face-shape tab menu is clicked, the display switches to the face-shape analysis; when the eyebrow tab menu is clicked, the face 3D model switches to the eyebrow analysis.
For example, the true 3D model can be displayed interactively; that is, the user can switch among different parts to view it and view it more intuitively. It should be noted that the pseudo-3D model may also be presented interactively.
In step 308, a report is generated containing the results of the aesthetic analysis.
The report generated in this step may be a full-screen report page, a long report page, or a pictorial report page, and the tab menus in the report page may include: aesthetic analysis, face shape, eyebrow shape, eye shape, lip shape, and skin detection. It should be noted that other tabs may be added according to the analysis results. The aesthetic analysis results in the generated report may include both the first and second aesthetic analysis results described above, or only the second aesthetic analysis result. The generated report may further include the first, second, or third 3D model of the face. An example full-screen report page is shown in Fig. 14.
For example, in this step the aesthetic analysis result is displayed by region in a long report page. The long report page displays tabs that jump to the corresponding regions, and clicking a tab menu jumps to the corresponding region to show the analysis of that part. For example: clicking the face-shape tab menu in the long report page switches the displayed face 3D model to the face-shape analysis and jumps the report area of the long report page to the face-shape region; clicking the eyebrow tab menu switches the displayed face 3D model to the eyebrow analysis and jumps the report area to the eyebrow region.
The aesthetic analysis tab menu area may show the full-face aesthetic analysis results, including: attractiveness-score data, the temperament analysis result, the personality analysis result, and the dominant facial features and features to improve.
In the long report page, the result of the aesthetic analysis can be interpreted and displayed through data and text description.
A long report page is displayed on the page showing the face 3D model, and the full-face and local analysis results are displayed in the long report page. For example, the full-face aesthetic analysis result may read: "Composite attractiveness score 124; the overall style of the face is gentle: the facial features are structurally mature yet soft, and the resting expression carries no aggressiveness, so the face looks mild and amiable. Temperament: the impression given is young and sensitive, though occasionally aloof. Personality: your personality appears perceptive and romantic, and is sometimes guarded. Dominant features: water-drop nose, oval face, standard eyebrows. Features to improve: the height of the chin and of the nose bridge."
In the long report page, the interpretation or description of the corresponding aesthetic analysis result is displayed in each of the face-shape, eyebrow-shape, eye-shape, lip-shape, and skin-detection areas. For example, the eyebrow-shape analysis result may read: "Your eyebrows are standard eyebrows; flat eyebrows suit you better; flat eyebrows are the most suitable eyebrow shape for you."
The long report page also displays tabs for intelligent beauty tools in the corresponding analysis display areas. For example, a tab for an intelligent eyebrow-drawing tool is displayed in the eyebrow-shape analysis area; after clicking this tab, the user enters the page of the intelligent eyebrow-drawing tool.
The long report page further includes information about and links to stars with a similar face shape; clicking these shows, for example, how the stars improved their looks. Clicking a style recommendation button jumps to the style recommendation page of the aesthetic diagnosis. In addition, the analysis report and shared videos can be saved and shared to social platforms. The long report page may have a video area for recording or playback; the video is, for example, a screen recording of the 3D face analysis process and can be shared directly to social software such as Douyin (TikTok).
It should be noted that the embodiment of the present disclosure is illustrated by, but not limited to, first displaying the aesthetic analysis result and then generating a report containing it; alternatively, only the aesthetic analysis result may be displayed without generating such a report, or only the report may be generated without displaying the result, and this may be flexibly configured as required.
The technical solution provided by the embodiment of the present disclosure realizes 3D facial aesthetic analysis and displays the result stereoscopically in 3D form, so that the analysis has more 3D depth, is more vivid and accurate, and improves the user experience; interactive operation is supported; and a report page can be generated, so that the user obtains the report content simply and quickly, views the aesthetic analysis result more conveniently and intuitively, and has his or her needs met more comprehensively.
The 3D face information processing method according to the present disclosure is described in detail above, and a 3D face information processing apparatus and a terminal corresponding to the present disclosure are described below.
Fig. 15 is a schematic block diagram of a 3D face information processing apparatus according to an embodiment of the present disclosure.
The apparatus may be located in a terminal device, such as a mobile terminal device or a computer device. Referring to Fig. 15, the 3D face information processing apparatus includes: an acquisition module 151, a face 3D model generation module 152, a face point cloud image generation module 153, and an aesthetic analysis processing module 154.
The acquisition module 151 is configured to acquire a face image of the user. It may acquire a face image captured by a camera or directly uploaded by the user; the face image generally includes the complete frontal facial-feature region, the cheek regions of the side face, the ears, and the like.
The face 3D model generation module 152 is configured to generate a face 3D model from the face image acquired by the acquisition module 151. It may generate the face 3D model from the acquired face image using a suitable 3D face recognition algorithm, which may be, for example, an algorithm based on image features, an algorithm based on model variable parameters, or an algorithm based on deep learning; the disclosure is not limited in this respect.
The face point cloud image generation module 153 is configured to generate and display a face point cloud image according to the key feature points in the face 3D model generated by the face 3D model generation module 152. It may obtain the key feature points in the face 3D model and connect them to generate the face point cloud image.
The aesthetic analysis processing module 154 is configured to compare the feature values of the key feature points in the face 3D model generated by the face 3D model generation module 152 with preset aesthetic feature values to obtain an aesthetic analysis result. It may determine the feature values of the key feature points in the face 3D model and compare them with the preset aesthetic feature values to obtain the aesthetic analysis result.
It can be seen that the solution provided by the embodiment of the disclosure realizes 3D facial aesthetic analysis, and the result can subsequently be displayed stereoscopically in 3D form, so that the analysis has more 3D depth, is more vivid and accurate, and improves the user experience.
Fig. 16 is another schematic block diagram of a 3D face information processing apparatus according to an embodiment of the present disclosure.
The apparatus may be located in a terminal device, such as a mobile terminal device or a computer device. Referring to Fig. 16, the 3D face information processing apparatus includes: an acquisition module 151, a face 3D model generation module 152, a face point cloud image generation module 153, an aesthetic analysis processing module 154, an aesthetic result display module 155, an interaction module 156, and a report generation module 157.
For the functions of the acquisition module 151, the face 3D model generation module 152, the face point cloud image generation module 153, and the aesthetic analysis processing module 154, reference may be made to the description of Fig. 15, which is not repeated here.
The aesthetic result display module 155 is configured to display the aesthetic analysis result obtained by the aesthetic analysis processing module 154.
The face 3D model generation module 152 may further include: a first generation module 1521 or a second generation module 1522.
The first generation module 1521 is configured to generate a first 3D model of the face from the face image acquired by the acquisition module 151 when the face image is captured by a 3D camera.
The second generation module 1522 is configured to generate a second 3D model of the face from the face image acquired by the acquisition module 151 when the face image is captured by a 2D camera.
Aesthetic result display module 155 may also include: a first display module 1551, a second display module 1552, and a third display module 1553. The aesthetic result display module 155 may display the aesthetic analysis results at different interfaces.
The first display module 1551 is configured to display the aesthetic analysis result obtained by the aesthetic analysis processing module 154 on the face point cloud image and/or the first 3D model of the face.
The second display module 1552 is configured to display the aesthetic analysis result obtained by the aesthetic analysis processing module 154 on the face point cloud image and/or the second 3D model of the face.
The third display module 1553 is configured to display the aesthetic analysis result obtained by the aesthetic analysis processing module 154 on the face point cloud image and/or a third 3D model of the face, where the third 3D model of the face is obtained by superimposing the face image captured by the 2D camera and the face point cloud image.
The interaction module 156 is configured to switch the display to the corresponding part of the first, second, or third 3D model of the face upon detecting that a tab menu of a face part has been clicked.
The report generation module 157 is configured to generate a report containing the aesthetic analysis results. The report generated by the report generation module 157 may be a full-screen report page, a long report page, or a pictorial report page, and the tabs in the report page may include: aesthetic analysis, face shape, eyebrow shape, eye shape, lip shape, and skin detection. It should be noted that other tabs may be added according to the analysis results.
For the functions of the modules in the 3D face information processing apparatus, reference may also be made to the corresponding description in the method above, which is not repeated here.
Fig. 17 is a schematic structural diagram of a terminal device according to an exemplary embodiment; the terminal device may be used to implement the method described above. The terminal device may be a mobile terminal device or a computer device, and the mobile terminal device may be a mobile phone, an iPad, or the like.
Referring to fig. 17, terminal device 1700 includes memory 1710 and processor 1720.
Processor 1720 may be a single multi-core processor or may include multiple processors. In some embodiments, processor 1720 may include a general-purpose host processor and one or more special-purpose coprocessors, such as a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), or the like. In some embodiments, processor 1720 may be implemented using custom circuits, such as an Application-Specific Integrated Circuit (ASIC) or a Field-Programmable Gate Array (FPGA).
Memory 1710 may include various types of storage units, such as system memory, Read-Only Memory (ROM), and persistent storage. The ROM may store static data or instructions needed by processor 1720 or other modules of the computer. The persistent storage device may be a read-write storage device, and may be a non-volatile storage device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, a mass storage device (e.g., a magnetic or optical disk, or flash memory) is employed as the persistent storage device. In other embodiments, the persistent storage may be a removable storage device (e.g., a floppy disk or an optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random-access memory. The system memory may store the instructions and data that some or all of the processors require at runtime. In addition, memory 1710 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory), and magnetic and/or optical disks. In some embodiments, memory 1710 may include a readable and/or writable removable storage device, such as a Compact Disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., an SD card, a mini SD card, a Micro-SD card, etc.), a magnetic floppy disk, or the like. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted by wireless or wired means.
Memory 1710 has stored thereon executable code that, when processed by processor 1720, can cause processor 1720 to perform the above-described methods.
The above-described method according to the present disclosure has been described in detail hereinabove with reference to the accompanying drawings.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Furthermore, the method according to the present disclosure may also be implemented as a computer program or computer program product comprising computer program code instructions for performing the above-mentioned steps defined in the above-mentioned method of the present disclosure.
Alternatively, the present disclosure may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) that, when executed by a processor of an electronic device (or computing device, server, or the like), causes the processor to perform the various steps of the above-described method according to the present disclosure.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.