CN114581986A - Image processing method, image processing device, electronic equipment and storage medium
- Publication number: CN114581986A
- Application number: CN202210216728.2A
- Authority: CN (China)
- Legal status: Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The present disclosure relates to an image processing method, an image processing apparatus, an electronic device, and a storage medium, the image processing method including: acquiring an image to be processed, wherein the image to be processed comprises a target object; extracting three-dimensional key points of the target object, and determining a first corresponding relationship between the three-dimensional key points and vertexes in a three-dimensional adjustment model; determining an adjustment result of the three-dimensional key points according to an adjustment instruction, preset deformation data of the vertexes in the three-dimensional adjustment model and the first corresponding relationship; and determining a target image according to the adjustment result of the three-dimensional key points and the image to be processed. Because the three-dimensional key points are extracted from the image to be processed and their adjustment result is finally mapped back onto the image to be processed, the adjustment process is precise and controllable, the adjustment of the target object in the resulting target image looks natural and realistic, and user satisfaction is improved.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of artificial intelligence technology, image processing offers ever more functions and ever better results. When people take photos or record videos, the captured face images can usually be beautified to make the face more attractive. In the related art, face beautification is performed by identifying and adjusting only the two-dimensional information of the face image, so the result lacks realism and naturalness, and the effect is difficult to satisfy users.
Disclosure of Invention
The present disclosure provides an image processing method, apparatus, device and storage medium to address drawbacks of the related art.
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, including:
acquiring an image to be processed, wherein the image to be processed comprises a target object;
extracting three-dimensional key points of the target object, and determining a first corresponding relation between the three-dimensional key points and vertexes in a three-dimensional adjustment model;
determining an adjustment result of the three-dimensional key point according to an adjustment instruction, preset deformation data of a vertex in the three-dimensional adjustment model and the first corresponding relation;
and determining a target image according to the adjustment result of the three-dimensional key point and the image to be processed.
In one example, further comprising:
acquiring a second corresponding relation between a vertex in the three-dimensional adjustment model and a vertex in the three-dimensional standard model;
the determining a first corresponding relationship between the three-dimensional key point and a vertex in a three-dimensional adjustment model includes:
determining a third corresponding relation between the three-dimensional key point and a vertex in the three-dimensional standard model according to the identifier of the three-dimensional key point and the identifier of the vertex in the three-dimensional standard model;
and determining the first corresponding relation according to the second corresponding relation and the third corresponding relation.
In one example, the obtaining of the second corresponding relationship between the vertex in the three-dimensional adjustment model and the vertex in the three-dimensional standard model includes:
converting the coordinates of the vertex in the three-dimensional adjustment model and the coordinates of the vertex in the three-dimensional standard model into the same coordinate system;
determining the Euclidean distance between each vertex in the three-dimensional adjustment model and each vertex in the three-dimensional standard model;
and determining the second corresponding relation according to the Euclidean distance between each vertex in the three-dimensional adjustment model and each vertex in the three-dimensional standard model.
In one example, the determining, according to the identifier of the three-dimensional key point and the identifier of the vertex in the three-dimensional standard model, a third corresponding relationship between the three-dimensional key point and the vertex in the three-dimensional standard model includes:
and determining the three-dimensional key points with the same identification and the vertexes of the three-dimensional standard model as corresponding vertex pairs to obtain the third corresponding relation.
In one example, the determining, according to the adjustment instruction, the preset deformation data of the vertex in the three-dimensional adjustment model, and the first corresponding relationship, the adjustment result of the three-dimensional key point includes:
determining an adjusting item of the target object according to the adjusting instruction;
acquiring first deformation data corresponding to the adjustment item in preset deformation data of a vertex in the three-dimensional adjustment model;
and displacing the three-dimensional key point corresponding to at least one vertex in the three-dimensional adjustment model according to the first deformation data, so as to obtain an adjustment result of the three-dimensional key point.
In one example, before the shifting the three-dimensional key point corresponding to at least one vertex in the three-dimensional adjustment model according to the first deformation data, so as to obtain the adjustment result of the three-dimensional key point, the method further includes:
determining an adjustment parameter of the adjustment item according to the adjustment instruction;
and adjusting the first deformation data according to the adjusting parameters.
In one example, the determining a target image according to the adjustment result of the three-dimensional key point and the image to be processed includes:
constructing a first grid among the adjustment results of the three-dimensional key points according to a first preset topological structure;
and rendering the adjustment result of the target object on the image to be processed according to the pixel information of the projection point of the three-dimensional key point on the image to be processed, the coordinate information of the projection point of the adjustment result of the three-dimensional key point on the image to be processed and the first grid to obtain a target image.
In one example, the rendering, according to the pixel information of the projection point of the three-dimensional key point on the image to be processed, the coordinate information of the projection point of the adjustment result of the three-dimensional key point on the image to be processed, and the first mesh, the adjustment result of the target object on the image to be processed to obtain a target image includes:
extracting pixel information of a projection point of the three-dimensional key point on the image to be processed as pixel information of an adjustment result of the three-dimensional key point;
rendering to obtain pixel information in the first grid according to the pixel information of the adjustment result of the three-dimensional key point;
and projecting the pixel information of the three-dimensional key point and the pixel information in the first grid onto the image to be processed according to the coordinate information of the projection point of the adjustment result of the three-dimensional key point on the image to be processed to obtain the target image.
In one example, further comprising:
generating a three-dimensional extended model surrounding the three-dimensional key points, wherein vertexes in the three-dimensional extended model at least comprise an adjustment result of a first edge point and a second edge point obtained based on the extension of the first edge point, and the first edge point is a three-dimensional key point corresponding to the boundary of the target object;
and rendering the three-dimensional expansion model on the image to be processed according to the three-dimensional expansion model and the image to be processed.
In one example, the generating a three-dimensional extended model surrounding the three-dimensional keypoints comprises:
acquiring a first edge point of the boundary corresponding to the target object in the three-dimensional key points;
on an extension line of a line connecting a central point of the three-dimensional key points and the first edge point, determining a point at a preset distance from the first edge point as a second edge point corresponding to the first edge point;
and according to a second preset topological structure, constructing a second grid between the adjustment result of the first edge point and the second edge point to obtain the three-dimensional expansion model.
In one example, the obtaining a first edge point of the three-dimensional key points corresponding to the boundary of the target object includes:
acquiring a projection point of the three-dimensional key point in the image to be processed;
determining a projection point on the boundary of the target object according to the projection points of the target object and the three-dimensional key point in the image to be processed;
and determining the three-dimensional key point corresponding to the projection point on the boundary of the target object as the first edge point.
In one example, the determining a projection point on a boundary of the target object according to the projection points of the target object and the three-dimensional key point in the image to be processed includes:
arranging a plurality of straight lines in a preset direction;
and determining two projection points on the boundary of the target object from the projection points of the three-dimensional key points on each straight line.
In one example, the rendering the three-dimensional extended model on the image to be processed according to the three-dimensional extended model and the image to be processed includes:
extracting pixel information of a projection point of the first edge point on the image to be processed as pixel information of an adjustment result of the first edge point, and extracting pixel information of a projection point of the second edge point on the image to be processed as pixel information of the second edge point;
rendering to obtain pixel information in the second grid according to the pixel information of the adjustment result of the first edge point and the pixel information of the second edge point;
and projecting the pixel information of the adjustment result of the first edge point, the pixel information of the second edge point and the pixel information in the second grid onto the image to be processed according to the coordinate information of the projection points, on the image to be processed, of the adjustment result of the first edge point and of the second edge point.
In one example, the target object includes at least one of the following objects in the image to be processed: face, hands, limbs.
According to a second aspect of an embodiment of the present disclosure, there is provided an image processing apparatus including:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an image to be processed, and the image to be processed comprises a target object;
the first corresponding module is used for extracting three-dimensional key points of the target object and determining a first corresponding relation between the three-dimensional key points and vertexes in a three-dimensional adjustment model;
the adjusting module is used for determining an adjusting result of the three-dimensional key point according to an adjusting instruction, preset deformation data of a vertex in the three-dimensional adjusting model and the first corresponding relation;
and the target module is used for determining a target image according to the adjustment result of the three-dimensional key point and the image to be processed.
In one example, the apparatus further comprises a second corresponding module for:
acquiring a second corresponding relation between a vertex in the three-dimensional adjustment model and a vertex in the three-dimensional standard model;
the first corresponding module is specifically configured to:
determining a third corresponding relation between the three-dimensional key point and a vertex in the three-dimensional standard model according to the identifier of the three-dimensional key point and the identifier of the vertex in the three-dimensional standard model;
and determining the first corresponding relation according to the second corresponding relation and the third corresponding relation.
In one example, the second corresponding module is specifically configured to:
converting the coordinates of the vertex in the three-dimensional adjustment model and the coordinates of the vertex in the three-dimensional standard model into the same coordinate system;
determining the Euclidean distance between each vertex in the three-dimensional adjustment model and each vertex in the three-dimensional standard model;
and determining the second corresponding relation according to the Euclidean distance between each vertex in the three-dimensional adjustment model and each vertex in the three-dimensional standard model.
In an example, the first corresponding module, when determining the third corresponding relationship between the three-dimensional key point and the vertex in the three-dimensional standard model according to the identifier of the three-dimensional key point and the identifier of the vertex in the three-dimensional standard model, is specifically configured to:
and determining the three-dimensional key points with the same identification and the vertexes of the three-dimensional standard model as corresponding vertex pairs to obtain the third corresponding relation.
In one example, the adjustment module is specifically configured to:
determining an adjusting item of the target object according to the adjusting instruction;
acquiring first deformation data corresponding to the adjustment item in preset deformation data of a vertex in the three-dimensional adjustment model;
and displacing the three-dimensional key point corresponding to at least one vertex in the three-dimensional adjustment model according to the first deformation data, so as to obtain an adjustment result of the three-dimensional key point.
In one example, the adjustment module is further configured to:
before the three-dimensional key point corresponding to at least one vertex in the three-dimensional adjustment model is displaced according to the first deformation data so as to obtain an adjustment result of the three-dimensional key point, determining an adjustment parameter of the adjustment item according to the adjustment instruction;
and adjusting the first deformation data according to the adjustment parameters.
In one example, the target module is specifically configured to:
constructing a first grid among the adjustment results of the three-dimensional key points according to a first preset topological structure;
and rendering the adjustment result of the target object on the image to be processed according to the pixel information of the projection point of the three-dimensional key point on the image to be processed, the coordinate information of the projection point of the adjustment result of the three-dimensional key point on the image to be processed and the first grid to obtain a target image.
In an example, the target module is configured to, according to the pixel information of the projection point of the three-dimensional key point on the image to be processed, the coordinate information of the projection point of the adjustment result of the three-dimensional key point on the image to be processed, and the first mesh, render the adjustment result of the target object on the image to be processed, and when obtaining the target image, specifically configured to:
extracting pixel information of a projection point of the three-dimensional key point on the image to be processed as pixel information of an adjustment result of the three-dimensional key point;
rendering to obtain pixel information in the first grid according to the pixel information of the adjustment result of the three-dimensional key point;
and projecting the pixel information of the three-dimensional key point and the pixel information in the first grid onto the image to be processed according to the coordinate information of the projection point of the adjustment result of the three-dimensional key point on the image to be processed to obtain the target image.
In one example, an extension module is further included for:
generating a three-dimensional extended model surrounding the three-dimensional key points, wherein vertexes in the three-dimensional extended model at least comprise an adjustment result of a first edge point and a second edge point obtained based on the extension of the first edge point, and the first edge point is a three-dimensional key point corresponding to the boundary of the target object;
and rendering the three-dimensional expansion model on the image to be processed according to the three-dimensional expansion model and the image to be processed.
In an example, when the extension module is configured to generate a three-dimensional extension model surrounding the three-dimensional keypoint, the extension module is specifically configured to:
acquiring a first edge point of the boundary corresponding to the target object in the three-dimensional key points;
on an extension line of a line connecting a central point of the three-dimensional key points and the first edge point, determining a point at a preset distance from the first edge point as a second edge point corresponding to the first edge point;
and according to a second preset topological structure, constructing a second grid between the adjustment result of the first edge point and the second edge point to obtain the three-dimensional expansion model.
In an example, when the extension module is configured to obtain a first edge point of the boundary of the target object in the three-dimensional key points, the extension module is specifically configured to:
acquiring a projection point of the three-dimensional key point in the image to be processed;
determining a projection point on the boundary of the target object according to the projection points of the target object and the three-dimensional key point in the image to be processed;
and determining the three-dimensional key point corresponding to the projection point on the boundary of the target object as the first edge point.
In an example, when determining the projection point on the boundary of the target object according to the projection point of the target object and the three-dimensional key point in the image to be processed, the extension module is specifically configured to:
arranging a plurality of straight lines in a preset direction;
and determining two projection points on the boundary of the target object from the projection points of the three-dimensional key points on each straight line.
In an example, the extension module is configured to, when rendering the three-dimensional extension model on the image to be processed according to the three-dimensional extension model and the image to be processed, specifically:
extracting pixel information of a projection point of the first edge point on the image to be processed as pixel information of an adjustment result of the first edge point, and extracting pixel information of a projection point of the second edge point on the image to be processed as pixel information of the second edge point;
rendering to obtain pixel information in the second grid according to the pixel information of the adjustment result of the first edge point and the pixel information of the second edge point;
and projecting the pixel information of the adjustment result of the first edge point, the pixel information of the second edge point and the pixel information in the second grid onto the image to be processed according to the coordinate information of the projection points, on the image to be processed, of the adjustment result of the first edge point and of the second edge point.
In one example, the target object includes at least one of the following objects in the image to be processed:
face, hands, limbs.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising a memory for storing computer instructions executable on a processor, the processor being configured to implement the method of the first aspect when executing the computer instructions.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the first aspect.
According to the above embodiments, the image to be processed is acquired, the three-dimensional key points of the target object in the image to be processed are extracted, the first corresponding relationship between the three-dimensional key points and the vertexes in the three-dimensional adjustment model of the target object is determined, the adjustment result of the three-dimensional key points is determined according to the adjustment instruction, the preset deformation data of the vertexes in the three-dimensional adjustment model and the first corresponding relationship, and the target image is finally determined according to the adjustment result of the three-dimensional key points and the image to be processed. Because the adjustment model is three-dimensional and carries preset deformation data for its vertexes, and the target object is adjusted by displacing the three-dimensional key points according to the adjustment instruction and the preset deformation data, the adjustment process is precise and controllable.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating an image processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an interactive interface shown in one embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a standard model of a human face according to an embodiment of the present disclosure;
FIG. 4 is a flow chart illustrating determining a first correspondence according to one embodiment of the present disclosure;
FIG. 5 is a flow diagram illustrating a manner of determining a target image according to one embodiment of the present disclosure;
FIG. 6 is a flow chart illustrating an image processing method according to another embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a comprehensive model shown in an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an image processing apparatus shown in an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of an electronic device shown in an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
In a first aspect, at least one embodiment of the present disclosure provides an image processing method; please refer to FIG. 1, which illustrates the flow of the method, including steps S101 to S104.
The method can be used to process an image to be processed, and specifically to adjust a target object in the image to be processed, where the target object can be at least one of the following objects in the image to be processed: a face, hands, or limbs; the adjusted items can be shape, size, position, and the like. For example, when a face exists in the image to be processed, the face information, namely the shape, size, proportion and so on of the facial features, can be adjusted, thereby completing enhancement processing such as face beautification, virtual micro-reshaping and body beautification.
The image to be processed may be an image captured by the image capturing device or a frame of a video recorded by the image capturing device. It can be understood that, in a case where each frame in a video recorded by an image capturing device is used as an image to be processed and processed by the method provided in the embodiment of the present application, the processing of the video is completed.
In addition, the method may be performed by an electronic device such as a terminal device or a server. The terminal device may be User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server, which may be a local server, a cloud server, or the like.
In step S101, a to-be-processed image is acquired, wherein the to-be-processed image includes a target object.
The image to be processed may be an image shot by an image acquisition device, or a frame of a video recorded by the image acquisition device, where the image acquisition device may be an electronic device with an image acquisition function, such as a mobile phone or a camera. The target object is the image, within the captured image, of a specific object in the real scene; for example, it may be a human face in the image to be processed (i.e., the image of the human face within the image), a human hand (i.e., the image of the human hand within the image), and so on.
An image shot by the image acquisition device can be identified: if the image contains the target object, it is acquired as the image to be processed; if it does not, it is not used as the image to be processed. Correspondingly, a video recorded by the image acquisition device can be identified frame by frame: if a frame contains the target object, that frame is acquired as an image to be processed; otherwise, it is not.
In a possible scenario, a user records a video with a mobile phone. During recording, a face sometimes appears in the picture and sometimes does not. Image frames in which a face appears are taken as images to be processed and beautified, while frames without a face are not. This ensures that faces in the video are beautified in real time, while reducing the load when no face is present, saving energy, avoiding mistaken processing of the picture, and so on.
In step S102, a three-dimensional key point of the target object is extracted, and a first corresponding relationship between the three-dimensional key point and a vertex in a three-dimensional adjustment model of the target object is determined.
In a specific implementation, there are various ways to extract the three-dimensional key points of the target object. In one example, a depth image corresponding to the image to be processed may be obtained, and the image obtained by aligning the image to be processed with the depth image is input into a pre-trained deep neural network for three-dimensional key point detection. When the image acquisition device captures the image to be processed, a structured-light sensor of its camera can be activated to capture the depth image, or Time-of-Flight (ToF) sensing can be used to capture the depth image.
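The sketch below illustrates the shape of this step; the detector `net`, the 4-channel RGB-D stacking, and all names are illustrative assumptions rather than the concrete implementation of this disclosure:

```python
import numpy as np

def extract_3d_keypoints(rgb: np.ndarray, depth: np.ndarray, net) -> np.ndarray:
    """Return an (N, 3) array of ordered 3D key points of the target object."""
    # The depth image is assumed to be already aligned to the RGB frame.
    assert rgb.shape[:2] == depth.shape[:2], "RGB and depth must be aligned"
    # Stack the aligned images into one 4-channel network input.
    rgbd = np.dstack([rgb.astype(np.float32) / 255.0, depth.astype(np.float32)])
    # `net` stands in for the pre-trained deep neural network; the row index of
    # its output doubles as the key point's identifier, as described below.
    keypoints = net(rgbd)  # expected shape: (N, 3)
    return np.asarray(keypoints, dtype=np.float32)
```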
A plurality of three-dimensional key points are detected, and each detected key point has an identifier; for example, if the detected key points are ordered, the identifier of each key point may be its order index. This is because the detection of the three-dimensional key points of the target object follows the characteristics or detection requirements of the target object; that is, whether a given position of the target object is detected, and how many three-dimensional key points are detected, can be preset in advance. For example, when a face image is detected, three-dimensional key point detection can be performed on the facial features and other key positions of the face; since those key positions carry identifiers, the detected three-dimensional key points carry identifiers as well.
The three-dimensional adjustment model is used to represent a specific adjustment manner for each adjustment item of the target object; for example, the three-dimensional adjustment model of a human face may be a beautification model, which represents the specific beautification manner of each beautification item (for example, items such as thin face, big eyes and small mouth). The three-dimensional adjustment model may be a three-dimensional model composed of a plurality of vertexes and a mesh structure between the vertexes; the vertexes of the three-dimensional adjustment model may also correspond to the key positions of the target object and likewise carry identifiers; for example, if the vertexes are ordered, the identifier of each vertex may be its order index. The order of the vertexes may be the same as or different from the order of the detected three-dimensional key points.
The detected three-dimensional key points of the target object correspond to the key positions of the target object, and the three-dimensional adjustment model of the target object also corresponds to those key positions; therefore, a first corresponding relationship between the three-dimensional key points and the vertexes in the three-dimensional adjustment model is established, that is, a three-dimensional key point and a vertex at the same key position correspond to each other.
In step S103, an adjustment result of the three-dimensional key point is determined according to an adjustment instruction, preset deformation data of a vertex in the three-dimensional adjustment model, and the first corresponding relationship.
The three-dimensional adjustment model may further store preset deformation data for each vertex, that is, the displacement data of the vertex under each adjustment item. Since each adjustment item requires at least one vertex to be displaced, the displacement data of every vertex under the different adjustment items can be recorded (if a vertex does not need to be displaced, its displacement data is 0). For example, the preset deformation data may be recorded as arrays: an array is established for each vertex, the three-dimensional coordinates of the vertex are recorded first, and then the displacement data of the vertex under each adjustment item are recorded in order. With this array layout, after the first corresponding relationship is determined, the coordinate information of the three-dimensional key point corresponding to a vertex can be converted into the model's coordinate system and appended to the vertex's array, which makes the subsequent adjustment of the key point convenient to compute.
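A minimal sketch of this per-vertex array layout, assuming an illustrative fixed list of adjustment items (the item names and helper functions are assumptions, not part of this disclosure):

```python
import numpy as np

# Fixed ordering of adjustment items; the names are illustrative assumptions.
ADJUSTMENT_ITEMS = ["thin_face", "big_eyes", "small_mouth"]

def make_vertex_record(coord, displacements):
    """Build one vertex's array: row 0 holds the vertex's coordinates, and
    row 1 + i holds its displacement under ADJUSTMENT_ITEMS[i] (0 if unused)."""
    rows = [np.asarray(coord, dtype=np.float32)]
    for item in ADJUSTMENT_ITEMS:
        rows.append(np.asarray(displacements.get(item, (0.0, 0.0, 0.0)),
                               dtype=np.float32))
    return np.stack(rows)  # shape: (1 + number of items, 3)

def item_displacement(record, item):
    """Fetch displacement data by the item's fixed position in the array."""
    return record[1 + ADJUSTMENT_ITEMS.index(item)]

# Example: this vertex is displaced only under the "big_eyes" item.
rec = make_vertex_record((0.10, 0.25, 0.03), {"big_eyes": (0.0, 0.012, 0.0)})
```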
The adjustment instruction may be generated according to a user operation; for example, if the user selects at least one adjustment item, an adjustment instruction for the at least one adjustment item is generated. The adjustment instruction may also be generated automatically by the terminal performing the method. When the target object is a face, the user can select at least one beautification item for the face, thereby generating a beautification instruction corresponding to the at least one item. As another example, when the target object is a face, at least one beautification item may be preset, and then, each time an image to be processed is acquired, a beautification instruction corresponding to the at least one item is generated automatically.
The adjustment process of the three-dimensional key points comprises the position adjustment process of the three-dimensional key points, so that the adjustment result of the three-dimensional key points comprises the three-dimensional key points with the adjusted positions.
In an example, when determining the adjustment result of the three-dimensional key point in this step, an adjustment item of the target object may be determined according to the adjustment instruction, then first deformation data corresponding to the adjustment item in preset deformation data of a vertex in the three-dimensional adjustment model is obtained, and finally the three-dimensional key point corresponding to at least one vertex in the three-dimensional adjustment model is displaced according to the first deformation data, so as to obtain the adjustment result of the three-dimensional key point.
An adjustment item includes an adjustment position and an adjustment type. The adjustment position refers to a local position of the target object, for example a particular facial feature of a human face, and the adjustment type refers to the expected adjustment result, for example making the position smaller, larger, rounder and so on. For example, when the target object is a human face and the adjustment item is big eyes, the adjustment position is the eyes and the adjustment type is enlarging; when the adjustment item is small mouth, the adjustment position is the mouth and the adjustment type is shrinking. Referring to FIG. 2, an interactive interface for face beautification is shown, in which adjustment items such as thin face, big eyes, small head, nose enlarging and small mouth are provided; after the user selects at least one of the adjustment items, an adjustment instruction containing the selected items is generated.
If the adjustment instruction has one adjustment item, the adjustment item is determined, and if the adjustment instruction has a plurality of adjustment items, the adjustment items are determined.
The first deformation data includes the displacement data of each vertex of the three-dimensional adjustment model under the adjustment item. Each vertex can be traversed, and the deformation data corresponding to the adjustment item in the adjustment instruction, that is, the displacement data of the vertex under that item, is obtained from each vertex. When each vertex stores its displacement data per adjustment item in array form, the displacement data can be fetched from the array position corresponding to the item; for example, if the displacement data under the thin-face, big-eyes and small-mouth items occupy the second to fourth positions in the array in sequence, then when the adjustment item is big eyes, the corresponding displacement data can be fetched directly from the third position in the array.
When there is only one adjustment item in the adjustment instruction, the deformation data corresponding to that item is acquired directly. When there are multiple adjustment items in the adjustment instruction, the deformation data corresponding to each item can be acquired. Alternatively, the three-dimensional adjustment model may store, in addition to the displacement data of each single adjustment item, displacement data corresponding to various combinations of adjustment items; when the instruction contains multiple items, the displacement data corresponding to that combination can be acquired directly. For example, when the only beautification item in the instruction is thin face, the deformation data of the thin-face item is acquired directly; when the instruction includes both thin face and big eyes, the deformation data of each item can be acquired separately, or the deformation data corresponding to the combination of the two items can be acquired.
In addition, besides the adjustment items, the adjustment instruction may further include an adjustment parameter for each item. Referring again to FIG. 2, the interface provides a slider for the adjustment parameter, and the parameter of an adjustment item in the instruction can be changed by moving the slider's handle. The adjustment parameter may be the degree of the item, and different degrees correspond to different deformation data; for example, at a degree of 100% the deformation data is the item's full preset deformation data, while at 60% it is 60% of that data. Therefore, before the three-dimensional key points corresponding to at least one vertex in the three-dimensional adjustment model are displaced according to the first deformation data to obtain the adjustment result of the three-dimensional key points, the adjustment parameter of the item is further determined according to the adjustment instruction, and the first deformation data is then scaled according to that parameter. Adjusting the degree of an item through the parameter increases the flexibility and diversity of the adjustment.
The adjustment result of a three-dimensional key point can be computed from the coordinates of the key point corresponding to a vertex and the vertex's displacement data. Under one or more adjustment items, if the displacement data of at least one vertex is not 0, the three-dimensional key points corresponding to those vertexes are displaced by the computation. Since the preset deformation data is recorded as arrays, and since, after the first corresponding relationship is determined, the coordinate information of the key point corresponding to each vertex has been converted into the model's coordinate system and appended to the vertex's array, the key point's coordinates and the displacement data can be extracted directly from each vertex's array for the computation.
In a possible implementation, when the displacement data of a certain vertex in the acquired first deformation data includes a plurality of entries, none of which is 0, the coordinates of the three-dimensional key point corresponding to that vertex may be computed with each entry in turn, in a preset order.
After a three-dimensional key point has been displaced in at least one of these manners, its position has been adjusted; the key point at its new position is the position-adjusted key point in the adjustment result of the three-dimensional key points.
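Putting these pieces together, a hedged sketch of the displacement step, reusing the per-vertex array layout sketched earlier (all names, and the representation of the first corresponding relationship as a dictionary, are assumptions):

```python
import numpy as np

def adjust_keypoints(keypoints, first_corr, records, item_index, degree=1.0):
    """keypoints: (N, 3) key points already converted into the model coordinate
    system; first_corr: key-point identifier -> vertex identifier; records[v]:
    the per-vertex array (row 0: coordinates, row 1 + i: displacement under
    item i); degree: adjustment parameter, e.g. 0.6 for a 60% setting."""
    result = np.asarray(keypoints, dtype=np.float32).copy()
    for kp_id, v_id in first_corr.items():
        disp = records[v_id][1 + item_index]  # displacement under the chosen item
        result[kp_id] += degree * disp        # an all-zero row leaves the point fixed
    return result
```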
In step S104, a target image is determined according to the adjustment result of the three-dimensional key point and the image to be processed.
In this step, the corresponding positions of the image to be processed are adjusted in the same way according to the adjustment result of the three-dimensional key points; that is, the adjustment result is mapped back onto the image to be processed, thereby completing the adjustment of the target object in the image. For example, when the target object is a face image, the beautification of the face image can be completed in this step.
According to the embodiment, the target image is determined according to the adjustment result of the three-dimensional key point and the image to be processed. The adjustment model is three-dimensional and has preset deformation data of vertexes, and the adjustment of the target object is realized by adjusting the three-dimensional key points according to the adjustment instruction and the preset deformation data, so that the adjustment process is more precise and controllable.
In some embodiments of the present disclosure, the image processing method further comprises: and acquiring a second corresponding relation between the vertex in the three-dimensional adjustment model of the target object and the vertex in the three-dimensional standard model.
The three-dimensional standard model is a standard model of the target object, composed of a plurality of vertexes and the meshes between them. For example, when the target object is a face image, the three-dimensional standard model may be the standard face model shown in FIG. 3. If three-dimensional key point extraction is performed on the three-dimensional standard model in the manner of step S102, all vertexes of the standard model are obtained. The three-dimensional adjustment model represents a specific adjustment manner of at least one adjustment item of the three-dimensional standard model, that is, the deformation data of each position of the standard model under the at least one item. The three-dimensional adjustment model has its own model coordinate system, in which the coordinates of each vertex are expressed, and it also has a coordinate transformation matrix for converting coordinates from the model coordinate system into the world coordinate system.
Therefore, the coordinates of the vertexes in the three-dimensional adjustment model and the coordinates of the vertexes in the three-dimensional standard model may first be converted into the same coordinate system; for example, if the standard model's vertex coordinates are in the world coordinate system, the adjustment model's vertex coordinates can be multiplied by the model's coordinate transformation matrix to convert them into the world coordinate system. The Euclidean distance between each vertex in the adjustment model and each vertex in the standard model is then determined. Finally, the second corresponding relationship is determined from these Euclidean distances: whether two vertexes correspond can be decided by comparing their Euclidean distance with a preset distance threshold, that is, if the distance is smaller than the threshold the two vertexes correspond, and otherwise they do not.
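A minimal sketch of this nearest-vertex matching, assuming a 4x4 homogeneous transformation matrix and a simple per-vertex nearest-neighbor search (the names and the dictionary output are illustrative):

```python
import numpy as np

def second_correspondence(adjust_verts, model_matrix, standard_verts, threshold):
    """Map each adjustment-model vertex to the nearest standard-model vertex.
    adjust_verts: (M, 3) in the model coordinate system; model_matrix: 4x4
    coordinate transformation matrix; standard_verts: (K, 3) in world space."""
    # Convert the adjustment model's vertexes into the world coordinate system.
    homo = np.hstack([adjust_verts, np.ones((len(adjust_verts), 1), np.float32)])
    world = (model_matrix @ homo.T).T[:, :3]
    corr = {}
    for i, v in enumerate(world):
        dists = np.linalg.norm(standard_verts - v, axis=1)  # Euclidean distances
        j = int(np.argmin(dists))
        if dists[j] < threshold:      # the vertexes correspond only if close enough
            corr[i] = j
    return corr
```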
Based on this, the first corresponding relationship between the three-dimensional key point and the vertex in the three-dimensional adjustment model may be determined as shown in fig. 4, including steps S401 to S402.
In step S401, a third corresponding relationship between the three-dimensional key point and the vertex in the three-dimensional standard model is determined according to the identifier of the three-dimensional key point and the identifier of the vertex in the three-dimensional standard model.
Since the three-dimensional standard model is generated based on a standard object (such as a standard face image), and the key points of the standard face and of the target face follow the same placement logic, the positions of the vertexes in the standard model match the positions of the three-dimensional key points in the target object, and their identifiers match as well. Therefore, the three-dimensional key points and the standard-model vertexes with the same identifier are determined as corresponding vertex pairs, yielding the third corresponding relationship. For example, if the order of the standard model's vertexes is consistent with the order of the target object's three-dimensional key points, the key points and vertexes sharing the same order index may be determined as pairs of vertexes corresponding to each other, so as to obtain the third corresponding relationship.
In step S402, the first corresponding relationship is determined according to the second corresponding relationship and the third corresponding relationship.
The vertex of the three-dimensional adjustment model and the three-dimensional key point that correspond to the same vertex of the three-dimensional standard model are determined as corresponding to each other.
In this embodiment, the second corresponding relationship between the vertexes of the three-dimensional standard model and the vertexes of the three-dimensional adjustment model is preset; afterwards, each time the three-dimensional key points of the target object of an image to be processed are obtained, the first corresponding relationship between the key points and the adjustment-model vertexes can be determined simply by using the identifiers.
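In other words, the first corresponding relationship is the composition of the third and second relationships; a sketch, assuming each relationship is represented as a dictionary of identifiers:

```python
def first_correspondence(second_corr, third_corr):
    """second_corr: adjustment-model vertex id -> standard-model vertex id;
    third_corr: key-point id -> standard-model vertex id (same identifiers).
    Returns the first corresponding relationship:
    key-point id -> adjustment-model vertex id."""
    std_to_adjust = {std: adj for adj, std in second_corr.items()}
    return {kp: std_to_adjust[std]
            for kp, std in third_corr.items() if std in std_to_adjust}
```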
In some embodiments of the present disclosure, a target image may be determined according to an adjustment result of the three-dimensional key point and the image to be processed in a manner as shown in fig. 5, including step S501 to step S502.
In step S501, a first mesh is constructed between the adjustment results of the three-dimensional key points according to a first preset topological structure.
The first preset topological structure refers to the connection relationship of the vertexes within the meshes of the three-dimensional standard model, specifically, which other vertexes each vertex must be connected to. The first topological structure is described in terms of vertex identifiers. Since the identifiers of the three-dimensional key points are consistent with the identifiers of the standard model's vertexes, and the third corresponding relationship between them is established in step S401, the identifier of each key point remains definite after at least one key point is displaced, and so does its correspondence to the standard model's vertexes; therefore, the adjustment results of the different key points can be connected according to the first topological structure to construct the first mesh.
After the first mesh is constructed between the adjustment results of the three-dimensional key points, a target model of the target object is formed.
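A sketch of this construction, assuming the preset topology is stored as triangles over key-point identifiers (a common representation; the names are illustrative):

```python
import numpy as np

def build_target_model(adjusted_keypoints, first_topology):
    """adjusted_keypoints: (N, 3) adjustment results of the key points;
    first_topology: list of (i, j, k) triangles given as key-point identifiers.
    Because the topology is expressed in identifiers, it applies unchanged
    after the key points are displaced."""
    faces = np.asarray(first_topology, dtype=np.int32)
    assert faces.max() < len(adjusted_keypoints), "topology must index valid points"
    return np.asarray(adjusted_keypoints, dtype=np.float32), faces  # target model
```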
In step S502, an adjustment result of the target object is rendered on the image to be processed according to the pixel information of the projection point of the three-dimensional key point on the image to be processed, the coordinate information of the projection point of the adjustment result of the three-dimensional key point on the image to be processed, and the first mesh, so as to obtain a target image.
Pixel information of a projection point of the three-dimensional key point on the image to be processed can be extracted first and used as pixel information of an adjustment result of the three-dimensional key point; then, according to the pixel information of the adjustment result of the three-dimensional key point, rendering to obtain the pixel information in the first grid; and finally, projecting the pixel information of the three-dimensional key point and the pixel information in the first grid onto the image to be processed according to the coordinate information of the projection point of the adjustment result of the three-dimensional key point on the image to be processed to obtain the target image. For example, if the target object is a face image, the target image is an image in which the face image is finished with the beauty process.
Compared with the image to be processed, some positions of the target object in the target image are adjusted; for example, the eyes of the face in the image to be processed become larger and the mouth becomes smaller. These adjustments change the pixels of the image: for example, when the eyes become larger, the pixels of the eye region are displaced, their number increases, and the shape formed by all the eye pixels changes. In the present application, the pixels are not adjusted directly on the two-dimensional image to be processed; instead, they are adjusted by adjusting the three-dimensional key points extracted from the image. That is to say, each three-dimensional key point corresponds to its projection point on the image to be processed, so the adjustment result of a key point can represent the adjustment result of the corresponding projection point, that is, of the projection point's pixel; concretely, the pixel information at the projection point of the original key point can be extracted and re-projected onto the projection point of the key point's adjustment result.
The first mesh includes a plurality of closed sub-meshes, each enclosed by at least three three-dimensional key points. When rendering the pixel information within the first mesh, the pixel information within each sub-mesh may be rendered from the pixel information of the key points surrounding that sub-mesh. This is because, when the three-dimensional key points are extracted from the image to be processed, they can be extracted at the boundaries of pixel regions of uniform pixel information, so the pixel information inside such a region matches that of its boundary pixels, and hence the pixel information inside a sub-mesh formed by the corresponding key points also matches the pixel information of those key points.
In one example, the image to be processed may first be rendered onto an output image, for example copied onto it through an image copy operation of a rendering API such as OpenGL; the output image is the projection target for the pixel information of the three-dimensional key points and the pixel information within the first mesh, that is, the background of the target image. The image to be processed is then used as the input image of the rendering pass, and the coordinates of each key point's projection point on the image to be processed are used as the texture sampling coordinates of the input image, so that the pixel information of the key points' projection points can be extracted as the pixel information of the key points' adjustment results. Next, the coordinates of the adjustment results are converted into the world coordinate system through the model transformation matrix of the three-dimensional adjustment model, converted from the world coordinate system into the clip space through the world-to-clip transformation matrix, and passed on to the subsequent fragment rendering stage. There, the pixel information within the first mesh is rendered from the pixel information of the adjustment results, and the adjustment results together with the in-mesh pixel information are rendered onto the output image, so that the adjustment result of the target object is rendered onto the image to be processed and the target image is obtained.
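A sketch of the vertex-level bookkeeping in this pass: texture coordinates come from the original projections, while positions come from the adjusted points (a numpy stand-in for the shader-side computation; the matrix names and conventions are assumptions):

```python
import numpy as np

def build_render_buffers(orig_proj, adjusted_pts, model_mat, world_to_clip, w, h):
    """orig_proj: (N, 2) pixel projections of the ORIGINAL key points (where
    colors are sampled from); adjusted_pts: (N, 3) adjustment results in the
    model coordinate system (where the geometry is drawn); w, h: image size."""
    # Texture sampling coordinates: normalized projections of the original points.
    uv = np.asarray(orig_proj, dtype=np.float32) / np.array([w, h], np.float32)
    # Positions: model space -> world space -> clip space, as described above.
    homo = np.hstack([adjusted_pts, np.ones((len(adjusted_pts), 1), np.float32)])
    clip = (world_to_clip @ (model_mat @ homo.T)).T
    return uv, clip  # handed to rasterization and the fragment stage
```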
In this embodiment, a target model of the target object is formed by constructing the first mesh between the adjustment results of the three-dimensional key points; the pixel information of the adjustment results of the three-dimensional key points is extracted and the pixel information within the first mesh is rendered, so as to attach the pixel information of the target object onto the target model; and finally the target model carrying this pixel information is rendered onto the output image to obtain the final target image.
In some embodiments of the present disclosure, the image may also be processed in the manner shown in fig. 6, including steps S601 and S602.
In step S601, a three-dimensional extended model surrounding the three-dimensional key points is generated, where the vertices of the three-dimensional extended model include at least the adjustment results of first edge points and second edge points obtained by extending from the first edge points, a first edge point being a three-dimensional key point corresponding to the boundary of the target object.
In this step, a first edge point corresponding to the boundary of the target object in the three-dimensional key points may be obtained first.
Optionally, a projection point of the three-dimensional key point in the image to be processed is obtained first; determining a projection point on the boundary of the target object according to the projection points of the target object and the three-dimensional key point in the image to be processed; and finally, determining the three-dimensional key point corresponding to the projection point on the boundary of the target object as the first edge point.
For example, when determining the projection points on the boundary of the target object, a plurality of straight lines may be set along a preset direction, for example a plurality of horizontal lines set in advance, and then, among the projection points of the three-dimensional key points on each straight line, the two projection points lying on the boundary may be determined.
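For illustration, a sketch of this scan-line selection, under the assumption that points within a small pixel tolerance of each horizontal line count as lying on it and that the leftmost and rightmost such points are the two boundary points; the tolerance and function name are assumptions, not part of the disclosure:

```python
import numpy as np

def boundary_projection_points(proj, num_lines=10, tol=2.0):
    """On each of `num_lines` horizontal lines, take the leftmost and
    rightmost of the projection points lying within `tol` pixels of the
    line as the two boundary points.

    proj: (N, 2) array of (x, y) projection points of the 3D key points.
    Returns the indices of the key points selected as boundary points.
    """
    ys = np.linspace(proj[:, 1].min(), proj[:, 1].max(), num_lines)
    picked = set()
    for y in ys:
        on_line = np.where(np.abs(proj[:, 1] - y) < tol)[0]
        if len(on_line) >= 2:
            picked.add(int(on_line[np.argmin(proj[on_line, 0])]))  # leftmost
            picked.add(int(on_line[np.argmax(proj[on_line, 0])]))  # rightmost
    return sorted(picked)
```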
In this step, a point spaced from the first edge point by a preset distance may be determined as a second edge point corresponding to the first edge point on an extension line of a connection line between a central point of the three-dimensional key points and the first edge point.
Optionally, a calculation may be performed on the coordinate information of the three-dimensional key points to obtain the center point of the plurality of key points, that is, the point located at their center. Then a straight line is constructed from the center point through each first edge point and extended outward by a preset distance (set in advance, for example according to the size of the target object, such as 1/10 of its width); the end point of the extension is the second edge point corresponding to that first edge point.
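A sketch of this extension step, assuming the center point is the centroid of the key points and the preset distance is a fraction of the target object's width (the 1/10 example above); the names and the centroid choice are assumptions:

```python
import numpy as np

def extend_edge_points(keypoints, edge_idx, ratio=0.1):
    """For each first edge point, walk outward along the line from the
    key-point centroid through that point by ratio * object width, giving
    the corresponding second edge point.

    keypoints: (N, 3) float array of 3D key points.
    edge_idx:  indices of the first edge points.
    """
    center = keypoints.mean(axis=0)                        # assumed center point
    width = keypoints[:, 0].max() - keypoints[:, 0].min()  # object width
    second = []
    for i in edge_idx:
        direction = keypoints[i] - center
        direction = direction / np.linalg.norm(direction)  # unit outward direction
        second.append(keypoints[i] + direction * ratio * width)
    return np.array(second)
```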
In this step, a second mesh may be constructed between the adjustment result of the first edge point and the second edge point according to a second topology structure established in advance, so as to obtain the three-dimensional extended model.
The preset second topology structure refers to the connection relations of the points in the mesh between the plurality of first edge points and the plurality of second edge points, specifically, which points each point is connected to. For example, all the first edge points are connected in sequence to form the inner ring of the three-dimensional extended model; all the second edge points are connected in sequence to form its outer ring; each first edge point is then connected to its corresponding second edge point and also to the second edge point following it, thereby forming the second mesh of the three-dimensional extended model. Because the second topology is described in terms of the order of the first and second edge points, the second mesh can still be constructed from it after the first edge points have been adjusted.
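The connection order described above amounts to triangulating the ring between the inner and outer loops. A sketch, assuming n first edge points indexed 0..n-1 and their n second edge points indexed n..2n-1 (the indexing convention is an assumption for illustration):

```python
def ring_triangles(n):
    """Triangulate the ring between n inner (first edge) points, indexed
    0..n-1, and n outer (second edge) points, indexed n..2n-1, following
    the connection order in the text: each first edge point connects to
    its own second edge point and to the next one along the ring."""
    triangles = []
    for i in range(n):
        j = (i + 1) % n                      # next position on the ring
        triangles.append((i, n + i, n + j))  # inner i, outer i, outer i+1
        triangles.append((i, n + j, j))      # inner i, outer i+1, inner i+1
    return triangles
```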
The three-dimensional extended model is a model surrounding the three-dimensional key points: its outer ring (the ring formed by the second edge points) is unadjusted, while its inner ring (the ring formed by the first edge points) is adjusted, because the first edge points themselves are adjusted.
In the above embodiment, it has been mentioned that, after the first mesh is constructed between the adjustment results of the three-dimensional key points, the target model of the target object is formed. After the second mesh is constructed between the adjustment results of the first edge points and the second edge points, a three-dimensional extended model is added around the target model, and the target model and the three-dimensional extended model are connected into a whole through the first edge points. Fig. 7 shows a schematic diagram of the resulting comprehensive model when the target object is a human face.
In step S602, the three-dimensional extended model is rendered on the image to be processed according to the three-dimensional extended model and the image to be processed.
The pixel information at the projection point of the first edge point on the image to be processed may first be extracted as the pixel information of the adjustment result of the first edge point, and the pixel information at the projection point of the second edge point may be extracted as the pixel information of the second edge point. The pixel information within the second mesh is then rendered from the pixel information of the adjustment result of the first edge point and the pixel information of the second edge point. Finally, according to the coordinate information of the projection points of the adjustment result of the first edge point and of the second edge point on the image to be processed, the pixel information of the adjustment result of the first edge point, the pixel information of the second edge point, and the pixel information within the second mesh are projected onto the image to be processed.
These operations are the same as those in step S502, so this step may be executed with reference to the manner of operation in step S502. Alternatively, before step S502 is executed, a three-dimensional extended model may be constructed around the target model in the manner provided in this embodiment to obtain a comprehensive model, and the comprehensive model may then be operated on in the manner introduced in step S502, so that the target object and a ring of pixels around it are projected onto the image to be processed in one pass to obtain the target image.
In this embodiment, by determining the second edge points surrounding the target model formed by the three-dimensional key points and thereby determining the three-dimensional extended model, the adjustment result of the three-dimensional extended model can be projected onto the image to be processed at the same time as that of the target model. Because the outer ring of the three-dimensional extended model is unadjusted, the junction between the region corresponding to the extended model in the target image and its surrounding region is natural, with no abrupt seam; and although the inner ring is adjusted, the pixel information within the second mesh of the extended model is rendered, so the junction between the region corresponding to the extended model and the region corresponding to the target object is also natural. In other words, the outside of the comprehensive model is unadjusted, so the edge region of the whole comprehensive model on the target image is identical to the corresponding region of the image to be processed; and although the interior of the comprehensive model is adjusted, the rendering process overcomes the disharmony and unnaturalness that the adjustment could otherwise cause, so the region corresponding to the target object appears natural.
In the following, a complete process of the image processing method provided by the present application is described by taking the micro-shaping processing of a human face as an example.
First, a standard face 3D model is imported, a second corresponding relationship between the vertices of a prefabricated micro-shaping model and the vertices of the standard face 3D model is established by Euclidean distance calculation, and the second corresponding relationship is stored in a lookup table. Then the three-dimensional key points of the face in the image to be processed are extracted, a third corresponding relationship is established between three-dimensional key points and vertices of the standard face 3D model that share the same sequence number, a first corresponding relationship between the three-dimensional key points and the vertices of the micro-shaping model is established from the second and third corresponding relationships, and, according to the first corresponding relationship, each three-dimensional key point is written into the lookup table array of the corresponding vertex of the micro-shaping model.
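A sketch of the Euclidean-distance matching used for the second corresponding relationship, assuming both models' vertices have already been converted into the same coordinate system; the function name is illustrative:

```python
import numpy as np

def build_vertex_lut(micro_verts, standard_verts):
    """Second corresponding relationship: map each vertex of the micro-shaping
    model to its nearest vertex of the standard face 3D model by Euclidean
    distance. Both (N, 3) arrays are assumed to share one coordinate system."""
    lut = []
    for v in micro_verts:
        dists = np.linalg.norm(standard_verts - v, axis=1)
        lut.append(int(np.argmin(dists)))  # index of the closest standard vertex
    return lut
```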
Next, the projection point of each three-dimensional key point on the image to be processed is determined. Horizontal lines are constructed across the face in the image to be processed, the two projection points lying on the face boundary are determined among the projection points on each line, and the three-dimensional key points corresponding to these boundary projection points are taken as the first edge points. The center point of the three-dimensional key points is then calculated, the line from the center point through each first edge point is extended by a unit length to obtain the corresponding second edge point, and a second mesh is constructed between the first edge points and the second edge points to form the three-dimensional extended model.
Then, at least one three-dimensional key point is moved according to the currently set micro-shaping item (such as eye enlargement or face slimming), the micro-shaping parameter (i.e. the micro-shaping degree), and the deformation data of the vertices of the micro-shaping model. Specifically, the lookup table array of each vertex of the micro-shaping model may be traversed; the deformation data of the vertex under the shaping item is multiplied by the micro-shaping parameter to obtain the displacement data of the vertex, and this displacement data is applied to the three-dimensional key points in the vertex's lookup table array to obtain their displacement results. A first mesh is constructed between the displacement results of the three-dimensional key points, and the first mesh and the second mesh are joined into a whole through the first edge points to obtain the comprehensive model.
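A sketch of this displacement step under an assumed data layout — `lut[vertex]` holding the key-point ids stored in that vertex's lookup table array, and `deform[item][vertex]` holding the vertex's preset deformation under a shaping item; all names here are hypothetical:

```python
def displace_keypoints(keypoints, lut, deform, item, strength):
    """Move key points for one micro-shaping item.

    keypoints: {keypoint id: [x, y, z]} extracted 3D key points.
    lut:       lut[vertex] -> list of key-point ids in that vertex's
               lookup table array.
    deform:    deform[item][vertex] -> (dx, dy, dz) preset deformation data.
    strength:  micro-shaping parameter (the micro-shaping degree).
    """
    adjusted = {k: list(v) for k, v in keypoints.items()}
    for vertex, kpt_ids in enumerate(lut):
        dx, dy, dz = (c * strength for c in deform[item][vertex])
        for k in kpt_ids:  # displace every key point tied to this vertex
            adjusted[k][0] += dx
            adjusted[k][1] += dy
            adjusted[k][2] += dz
    return adjusted
```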
Finally, the image to be processed is attached to the comprehensive model and rendered to obtain the target image presenting the face micro-shaping effect. Specifically, the image to be processed may be rendered onto the output image, for example copied onto it by an image copy operation of a rendering API such as OpenGL; the image to be processed is then used as the input image of the rendering process, with the coordinates of the projection point of each three-dimensional key point on the image to be processed used as the texture sampling coordinates of the input image; and the coordinates of the adjustment results of the three-dimensional key points are transformed into the world coordinate system through the model transformation matrix of the three-dimensional adjustment model, then into the clipping coordinate system through the world-to-clip-space transformation matrix, and sent to the subsequent fragment rendering stage, realizing the presentation of a real-time 3D micro-shaping effect.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, referring to fig. 8, which shows a structure of the apparatus, including:
an obtaining module 801, configured to obtain an image to be processed, where the image to be processed includes a target object;
a first corresponding module 802, configured to extract a three-dimensional key point of the target object, and determine a first corresponding relationship between the three-dimensional key point and a vertex in a three-dimensional adjustment model;
an adjusting module 803, configured to determine an adjustment result of the three-dimensional key point according to an adjustment instruction, preset deformation data of a vertex in the three-dimensional adjustment model, and the first corresponding relationship;
and the target module 804 is used for determining a target image according to the adjustment result of the three-dimensional key point and the image to be processed.
In some embodiments of the present disclosure, a second corresponding module is further included for:
acquiring a second corresponding relation between a vertex in the three-dimensional adjustment model and a vertex in the three-dimensional standard model;
the first corresponding module is specifically configured to:
determining a third corresponding relation between the three-dimensional key point and a vertex in the three-dimensional standard model according to the identifier of the three-dimensional key point and the identifier of the vertex in the three-dimensional standard model;
and determining the first corresponding relation according to the second corresponding relation and the third corresponding relation.
In some embodiments of the disclosure, the second corresponding module is specifically configured to:
converting the coordinates of the vertex in the three-dimensional adjustment model and the coordinates of the vertex in the three-dimensional standard model into the same coordinate system;
determining Euclidean distance between each vertex in the three-dimensional adjustment model and each vertex in the three-dimensional standard model;
and determining the second corresponding relation according to the Euclidean distance between each vertex in the three-dimensional adjustment model and each vertex in the three-dimensional standard model.
In some embodiments of the present disclosure, the first corresponding module, when determining the third corresponding relationship between the three-dimensional key point and the vertex in the three-dimensional standard model according to the identifier of the three-dimensional key point and the identifier of the vertex in the three-dimensional standard model, is specifically configured to:
and determining the three-dimensional key points with the same identification and the vertexes of the three-dimensional standard model as corresponding vertex pairs to obtain the third corresponding relation.
In some embodiments of the present disclosure, the adjusting module is specifically configured to:
determining an adjusting item of the target object according to the adjusting instruction;
acquiring first deformation data corresponding to the adjustment item in preset deformation data of a vertex in the three-dimensional adjustment model;
and displacing the three-dimensional key point corresponding to at least one vertex in the three-dimensional adjustment model according to the first deformation data, so as to obtain an adjustment result of the three-dimensional key point.
In some embodiments of the present disclosure, the adjustment module is further configured to:
before the three-dimensional key point corresponding to at least one vertex in the three-dimensional adjustment model is displaced according to the first deformation data so as to obtain an adjustment result of the three-dimensional key point, determining an adjustment parameter of the adjustment item according to the adjustment instruction;
and adjusting the first deformation data according to the adjusting parameters.
In some embodiments of the present disclosure, the target module is specifically configured to:
constructing a first grid among the adjustment results of the three-dimensional key points according to a first preset topological structure;
and rendering the adjustment result of the target object on the image to be processed according to the pixel information of the projection point of the three-dimensional key point on the image to be processed, the coordinate information of the projection point of the adjustment result of the three-dimensional key point on the image to be processed and the first grid to obtain a target image.
In some embodiments of the present disclosure, the target module is configured to render the adjustment result of the target object on the image to be processed according to the pixel information of the projection point of the three-dimensional key point on the image to be processed, the coordinate information of the projection point of the adjustment result of the three-dimensional key point on the image to be processed, and the first mesh, and when obtaining the target image, is specifically configured to:
extracting pixel information of a projection point of the three-dimensional key point on the image to be processed as pixel information of an adjustment result of the three-dimensional key point;
rendering to obtain pixel information in the first grid according to the pixel information of the adjustment result of the three-dimensional key point;
and projecting the pixel information of the three-dimensional key point and the pixel information in the first grid onto the image to be processed according to the coordinate information of the projection point of the adjustment result of the three-dimensional key point on the image to be processed to obtain the target image.
In some embodiments of the present disclosure, an extension module is further included for:
generating a three-dimensional extended model surrounding the three-dimensional key points, wherein vertexes in the three-dimensional extended model at least comprise an adjustment result of a first edge point and a second edge point obtained based on the extension of the first edge point, and the first edge point is a three-dimensional key point corresponding to the boundary of the target object;
and rendering the three-dimensional expansion model on the image to be processed according to the three-dimensional expansion model and the image to be processed.
In some embodiments of the present disclosure, when the extension module is configured to generate a three-dimensional extension model surrounding the three-dimensional keypoint, it is specifically configured to:
acquiring a first edge point of the boundary corresponding to the target object in the three-dimensional key points;
determining a point which is separated from the first edge point by a preset distance as a second edge point corresponding to the first edge point on an extension line of a connecting line of a central point and the first edge point in the three-dimensional key points;
and according to a second preset topological structure, constructing a second grid between the adjustment result of the first edge point and the second edge point to obtain the three-dimensional expansion model.
In some embodiments of the present disclosure, when the extension module is configured to obtain a first edge point, corresponding to the boundary of the target object, of the three-dimensional key points, the extension module is specifically configured to:
acquiring a projection point of the three-dimensional key point in the image to be processed;
determining a projection point on the boundary of the target object according to the projection points of the target object and the three-dimensional key point in the image to be processed;
and determining the three-dimensional key point corresponding to the projection point on the boundary of the target object as the first edge point.
In some embodiments of the present disclosure, when determining the projection point on the boundary of the target object according to the projection point of the target object and the three-dimensional key point in the image to be processed, the extension module is specifically configured to:
arranging a plurality of straight lines in a preset direction;
and determining two projection points on the boundary of the target object from the projection points of the three-dimensional key points on each straight line.
In some embodiments of the present disclosure, the extension module is configured to, when rendering the three-dimensional extension model on the image to be processed according to the three-dimensional extension model and the image to be processed, specifically:
extracting pixel information of a projection point of the first edge point on the image to be processed as pixel information of an adjustment result of the first edge point, and extracting pixel information of a projection point of the second edge point on the image to be processed as pixel information of the second edge point;
rendering to obtain pixel information in the second grid according to the pixel information of the adjustment result of the first edge point and the pixel information of the second edge point;
and projecting the pixel information of the adjustment result of the first edge point, the pixel information of the second edge point and the pixel information in the second grid onto the image to be processed according to the adjustment result of the first edge point and the coordinate information of the projection point of the second edge point on the image to be processed.
In some embodiments of the present disclosure, the target object includes at least one of the following objects in the image to be processed: face, hands, limbs.
With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method of the first aspect, and will not be elaborated here.
In a third aspect, at least one embodiment of the present disclosure provides an electronic device; referring to fig. 9, which shows the structure of the device, the device includes a memory for storing computer instructions executable on a processor, and the processor is configured to process an image based on the method according to any one of the first aspect when executing the computer instructions.
In a fourth aspect, at least one embodiment of the disclosure provides a computer readable storage medium having a computer program stored thereon, which when executed by a processor, performs the method of any of the first aspects.
The disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment and detecting or identifying the relevant features, states, and attributes of the target object by means of various vision-related algorithms, an AR effect combining the virtual and the real, matched to the specific application, is obtained. For example, the target object may be a face, limb, gesture, or action associated with a human body, or a marker associated with an object, or a sand table, display area, or display item associated with a venue or place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application may relate not only to interactive scenes such as navigation, explanation, reconstruction, and virtual-effect overlay display of real scenes or articles, but also to special-effect processing related to people, such as makeup beautification, limb beautification, special-effect display, and virtual model display. The detection or identification of the relevant features, states, and attributes of the target object can be realized through a convolutional neural network, that is, a network model obtained by model training based on a deep learning framework.
In this disclosure, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "plurality" means two or more unless expressly limited otherwise.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (13)
1. An image processing method, comprising:
acquiring an image to be processed, wherein the image to be processed comprises a target object;
extracting a first edge point of the target object, and determining a first corresponding relation between the first edge point and a vertex in a three-dimensional adjustment model, wherein the first edge point is a three-dimensional key point which is positioned on the boundary of the target object in the three-dimensional key points of the target object;
determining an adjustment result of the first edge point according to an adjustment instruction, preset deformation data of a vertex in the three-dimensional adjustment model and the first corresponding relation;
generating a three-dimensional expansion model surrounding the first edge point, wherein a vertex in the three-dimensional expansion model at least comprises an adjustment result of the first edge point and a second edge point obtained based on the expansion of the first edge point;
and rendering the three-dimensional expansion model on the image to be processed according to the three-dimensional expansion model and the image to be processed to obtain a target image.
2. The image processing method according to claim 1, wherein the extracting the first edge point of the target object includes:
extracting three-dimensional key points of the target object;
acquiring a projection point of the three-dimensional key point in the image to be processed;
determining a projection point on the boundary of the target object according to the projection points of the target object and the three-dimensional key point in the image to be processed;
and determining the three-dimensional key point corresponding to the projection point on the boundary of the target object as the first edge point.
3. The image processing method according to claim 2, wherein the determining the projection point on the boundary of the target object according to the projection points of the target object and the three-dimensional key point in the image to be processed comprises:
arranging a plurality of straight lines in a preset direction;
and determining two projection points on the boundary of the target object from the projection points of the three-dimensional key points on each straight line.
4. The image processing method according to claim 1, further comprising:
acquiring a second corresponding relation between the vertex in the three-dimensional adjustment model and the vertex in the three-dimensional standard model;
the determining a first corresponding relationship between the first edge point and a vertex in the three-dimensional adjustment model includes:
determining a third corresponding relation between the first edge point and a vertex in the three-dimensional standard model according to the identifier of the first edge point and the identifier of the vertex in the three-dimensional standard model;
and determining the first corresponding relation according to the second corresponding relation and the third corresponding relation.
5. The image processing method according to claim 4, wherein the obtaining of the second corresponding relationship between the vertices in the three-dimensional adjusted model and the vertices in the three-dimensional standard model comprises:
converting the coordinates of the vertex in the three-dimensional adjustment model and the coordinates of the vertex in the three-dimensional standard model into the same coordinate system;
determining Euclidean distance between each vertex in the three-dimensional adjustment model and each vertex in the three-dimensional standard model;
and determining the second corresponding relation according to the Euclidean distance between each vertex in the three-dimensional adjustment model and each vertex in the three-dimensional standard model.
6. The method according to claim 4, wherein determining the third correspondence between the first edge point and the vertex in the three-dimensional standard model according to the identifier of the first edge point and the identifier of the vertex in the three-dimensional standard model comprises:
and determining the first edge points with the same identification and the vertexes of the three-dimensional standard model as corresponding vertex pairs to obtain the third corresponding relation.
7. The image processing method according to any one of claims 1 to 6, wherein the determining an adjustment result of the first edge point according to the adjustment instruction, the preset deformation data of the vertex in the three-dimensional adjustment model, and the first corresponding relationship comprises:
determining an adjusting item of the target object according to the adjusting instruction;
acquiring first deformation data corresponding to the adjustment item in preset deformation data of a vertex in the three-dimensional adjustment model;
and displacing the first edge point corresponding to at least one vertex in the three-dimensional adjustment model according to the first deformation data to obtain an adjustment result of the first edge point.
8. The image processing method according to claim 7, before the shifting the first edge point corresponding to at least one vertex in the three-dimensional adjustment model according to the first deformation data to obtain the adjustment result of the first edge point, further comprising:
determining an adjusting parameter of the adjusting item according to the adjusting instruction;
and adjusting the first deformation data according to the adjusting parameters.
9. The image processing method according to claim 2, wherein the generating a three-dimensional extended model surrounding the first edge point comprises:
determining a point which is separated from the first edge point by a preset distance as a second edge point corresponding to the first edge point on an extension line of a connecting line of a central point of the three-dimensional key point and the first edge point;
and according to a second topological structure which is made in advance, constructing a second grid between the adjustment result of the first edge point and the second edge point to obtain the three-dimensional expansion model.
10. The image processing method according to claim 1, wherein said rendering the three-dimensional extended model on the image to be processed according to the three-dimensional extended model and the image to be processed comprises:
extracting projection information of a projection point of the first edge point on the image to be processed as pixel information of an adjustment result of the first edge point, and extracting pixel information of a projection point of the second edge point on the image to be processed as pixel information of the second edge point;
rendering to obtain pixel information in the second grid according to the pixel information of the adjustment result of the first edge point and the pixel information of the second edge point;
and projecting the pixel information of the adjustment result of the first edge point, the pixel information of the second edge point and the pixel information in the second grid onto the image to be processed according to the adjustment result of the first edge point and the coordinate information of the projection point of the second edge point on the image to be processed.
11. An image processing apparatus characterized by comprising:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an image to be processed, and the image to be processed comprises a target object;
the first corresponding module is used for extracting a first edge point of the target object and determining a first corresponding relation between the first edge point and a vertex in a three-dimensional adjustment model, wherein the first edge point is a three-dimensional key point which is positioned on the boundary of the target object in the three-dimensional key points of the target object;
the adjusting module is used for determining an adjusting result of the first edge point according to an adjusting instruction, preset deformation data of a vertex in the three-dimensional adjusting model and the first corresponding relation;
the expansion module is used for generating a three-dimensional expansion model surrounding the first edge point, wherein a vertex in the three-dimensional expansion model at least comprises an adjustment result of the first edge point and a second edge point obtained by expansion based on the first edge point;
and the target module is used for rendering the three-dimensional expansion model on the image to be processed according to the three-dimensional expansion model and the image to be processed to obtain a target image.
12. An electronic device, comprising a memory for storing computer instructions executable on a processor, the processor being configured to implement the method of any one of claims 1 to 10 when executing the computer instructions.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210216728.2A CN114581986A (en) | 2021-10-20 | 2021-10-20 | Image processing method, image processing device, electronic equipment and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111222848.5A CN113657357B (en) | 2021-10-20 | 2021-10-20 | Image processing method, image processing device, electronic equipment and storage medium |
CN202210216728.2A CN114581986A (en) | 2021-10-20 | 2021-10-20 | Image processing method, image processing device, electronic equipment and storage medium |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111222848.5A Division CN113657357B (en) | 2021-10-20 | 2021-10-20 | Image processing method, image processing device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114581986A true CN114581986A (en) | 2022-06-03 |
Family
ID=78494756
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210216728.2A Pending CN114581986A (en) | 2021-10-20 | 2021-10-20 | Image processing method, image processing device, electronic equipment and storage medium |
CN202111222848.5A Active CN113657357B (en) | 2021-10-20 | 2021-10-20 | Image processing method, image processing device, electronic equipment and storage medium |
CN202210216729.7A Pending CN114581987A (en) | 2021-10-20 | 2021-10-20 | Image processing method, image processing device, electronic equipment and storage medium |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111222848.5A Active CN113657357B (en) | 2021-10-20 | 2021-10-20 | Image processing method, image processing device, electronic equipment and storage medium |
CN202210216729.7A Pending CN114581987A (en) | 2021-10-20 | 2021-10-20 | Image processing method, image processing device, electronic equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (3) | CN114581986A (en) |
WO (1) | WO2023066120A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114581986A (en) * | 2021-10-20 | 2022-06-03 | 北京市商汤科技开发有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN114648762B (en) * | 2022-03-18 | 2024-11-26 | 腾讯科技(深圳)有限公司 | Semantic segmentation method, device, electronic device and computer-readable storage medium |
CN115019021A (en) * | 2022-06-02 | 2022-09-06 | 北京字跳网络技术有限公司 | Image processing method, device, equipment and storage medium |
CN115239860B (en) * | 2022-09-01 | 2023-08-01 | 北京达佳互联信息技术有限公司 | Expression data generation method and device, electronic equipment and storage medium |
CN115409951B (en) * | 2022-10-28 | 2023-03-24 | 北京百度网讯科技有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108550185A (en) * | 2018-05-31 | 2018-09-18 | Oppo广东移动通信有限公司 | Beautifying faces treating method and apparatus |
CN108876708B (en) * | 2018-05-31 | 2022-10-25 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN108765273B (en) * | 2018-05-31 | 2021-03-09 | Oppo广东移动通信有限公司 | Virtual cosmetic surgery method and device for photographing faces |
CN108447017B (en) * | 2018-05-31 | 2022-05-13 | Oppo广东移动通信有限公司 | Face virtual face-lifting method and device |
CN109190503A (en) * | 2018-08-10 | 2019-01-11 | 珠海格力电器股份有限公司 | beautifying method, device, computing device and storage medium |
CN109584151B (en) * | 2018-11-30 | 2022-12-13 | 腾讯科技(深圳)有限公司 | Face beautifying method, device, terminal and storage medium |
CN111985265B (en) * | 2019-05-21 | 2024-04-12 | 华为技术有限公司 | Image processing method and device |
CN110675489B (en) * | 2019-09-25 | 2024-01-23 | 北京达佳互联信息技术有限公司 | Image processing method, device, electronic equipment and storage medium |
CN111368678B (en) * | 2020-02-26 | 2023-08-25 | Oppo广东移动通信有限公司 | Image processing method and related device |
CN113379623B (en) * | 2021-05-31 | 2023-12-19 | 北京达佳互联信息技术有限公司 | Image processing method, device, electronic equipment and storage medium |
CN114581986A (en) * | 2021-10-20 | 2022-06-03 | 北京市商汤科技开发有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2023066120A1 (en) | 2023-04-27 |
CN113657357A (en) | 2021-11-16 |
CN114581987A (en) | 2022-06-03 |
CN113657357B (en) | 2022-02-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||