CN111462309B - Modeling method and device for three-dimensional head, terminal equipment and storage medium - Google Patents
- Publication number: CN111462309B (application CN202010243821.3A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
- G06T3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
- G06T2200/08 — Indexing scheme for image data processing or generation, involving all processing steps from image acquisition to 3D model generation
Abstract
The application is applicable to the field of computer technology and provides a modeling method and device for a three-dimensional human head, a terminal device, and a storage medium. The method comprises the following steps: acquiring construction surfaces and RGB images corresponding to a target user in a plurality of preset directions, where the RGB image for each preset direction contains the color values of all coordinate points in the head point cloud for that direction; stitching the construction surfaces from the plurality of preset directions to obtain an initial head model of the target user; and color-filling the initial head model according to the color values in the RGB images to obtain the three-dimensional head model of the target user. This avoids incomplete models caused by missing points or flying points, improves model accuracy, and gives the three-dimensional head model a better display effect.
Description
Technical Field
The application belongs to the field of computer technology, and particularly relates to a modeling method and device for a three-dimensional human head, a terminal device, and a storage medium.
Background
With the development of technology, three-dimensional human head models have found very wide application, for example in glasses try-on based on a three-dimensional head. At present, a three-dimensional head is mainly built by rotating the head or rotating a camera so that the camera captures image information of the head at different angles, and a head model is constructed from that image information. However, image information for some angles is missing from the head model, so the model is incomplete and its accuracy is poor.
Disclosure of Invention
The embodiments of the present application provide a modeling method and device for a three-dimensional human head, a terminal device, and a storage medium, which can solve the problem of poor accuracy of three-dimensional head models.
In a first aspect, an embodiment of the present application provides a method for modeling a three-dimensional human head, including:
acquiring construction surfaces and RGB images corresponding to the target user in a plurality of preset directions, where each construction surface is formed from a head point cloud of the target user, and the RGB image for each preset direction contains the color values of all coordinate points in the head point cloud for that direction;

stitching the construction surfaces from the plurality of preset directions to obtain an initial head model of the target user;

and color-filling the initial head model according to the color value corresponding to each coordinate point of the head point cloud in the RGB images, to obtain the three-dimensional head model of the target user.
In this method, acquiring the head point clouds that form the construction surfaces corresponding to the target user in the plurality of preset directions determines the position of the head in three-dimensional space, realizing spatial positioning of the three-dimensional head, so that incomplete models caused by missing points or flying points are avoided and model accuracy is improved. Stitching the construction surfaces from the preset directions yields an initial head model of the target user, and color-filling the initial head model according to the color values of the pixels in the RGB images yields a three-dimensional head model carrying RGB information, giving the three-dimensional head model a better display effect.
In a second aspect, an embodiment of the present application provides a modeling apparatus for a three-dimensional human head, including:
an acquisition module, configured to acquire construction surfaces and RGB images corresponding to a target user in a plurality of preset directions, where each construction surface is formed from a head point cloud of the target user, and the RGB image for each preset direction contains the color value of each coordinate point in the head point cloud for that direction;

a stitching module, configured to stitch the construction surfaces from the plurality of preset directions to obtain an initial head model of the target user;

and a filling module, configured to color-fill the initial head model according to the color value corresponding to each coordinate point of the head point cloud in the RGB images, to obtain the three-dimensional head model of the target user.
In a third aspect, an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the method for modeling a three-dimensional human head according to any one of the first aspects.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which, when executed by a processor, implements a method of modeling a three-dimensional human head as described in any one of the first aspects above.
In a fifth aspect, embodiments of the present application provide a computer program product, which when run on a terminal device, causes the terminal device to perform the method of modeling a three-dimensional human head according to any one of the first aspects above.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a modeling method for a three-dimensional human head according to an embodiment of the present application;
FIG. 2 is a flow chart of a modeling method for a three-dimensional human head according to another embodiment of the present application;
FIG. 3 is a flow chart of a modeling method for a three-dimensional human head according to another embodiment of the present application;
FIG. 4 is a schematic diagram of Gray code patterns according to an embodiment of the present application;
FIG. 5 is a schematic view of an acquisition assembly provided in another embodiment of the present application;
FIG. 6 is a schematic structural diagram of a modeling apparatus for a three-dimensional human head provided in an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a terminal device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted as "when," "once," "in response to a determination," or "in response to detecting," depending on the context. Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining," "in response to determining," "upon detecting [the described condition or event]," or "in response to detecting [the described condition or event]."
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
As described in the related art, a three-dimensional head is mainly modeled by rotating the head or rotating a camera to acquire head information, but a model built this way has poor accuracy: acquiring head information from different angles takes a great deal of time, and the resulting model usually suffers from missing points or flying points, leaving parts such as the back of the head and the top of the head incomplete.
Therefore, the present application provides a modeling method for a three-dimensional human head that determines the position of the head in three-dimensional space and realizes spatial positioning of the three-dimensional head, so that incomplete models caused by missing points or flying points are avoided and model accuracy is improved; color-filling the initial head model then yields a three-dimensional head model with RGB information and a better display effect.
FIG. 1 shows a schematic flowchart of the three-dimensional head modeling method provided in the present application. The method may be applied to a terminal device including, by way of example and not limitation, a mobile phone, a tablet computer, a wearable device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a server, and the like; no limitation is placed on the specific type of terminal device.
S101, acquiring construction surfaces and RGB images corresponding to the target user in a plurality of preset directions, where each construction surface is formed from a head point cloud of the target user, and the RGB image for each preset direction contains the color values of all coordinate points in the head point cloud for that direction.
In S101, a preset direction is a direction that helps ensure complete positioning of the target user's head; taking the direction the target user's face points as the front, the preset directions may include, for example, directly in front of and directly above the head. It should be appreciated that in other embodiments the plurality of preset directions may be some other combination of directions. A construction surface is a surface formed by a plurality of adjacent coordinate points in the head point cloud, such as a triangular surface formed by three adjacent coordinate points or a quadrilateral surface formed by four adjacent coordinate points. The head point cloud is a set of coordinate points formed by the coordinates, in three-dimensional space, of the head feature points of the target user, and the head point cloud in each preset direction corresponds to at least one RGB image.
In one embodiment, the head point cloud of the target user may be acquired in the plurality of preset directions by a three-dimensional scanner, and the RGB images of the target user's head by a camera. In another embodiment, a projector and cameras with fixed positions may be installed in each of the preset directions; the projector projects a sequence of Gray code patterns (FIG. 4 shows a Gray code pattern) onto the head of the target user, head images carrying the Gray code patterns are collected by a plurality of cameras, the coordinate point in three-dimensional space of the head feature point corresponding to each pixel of the Gray code pattern is determined from the deformation of the Gray code patterns, a head point cloud is formed from these coordinate points, and a head image with RGB information is collected by the camera in each preset direction.
Further, since the Gray code pattern and the RGB image are acquired by cameras with the same resolution and fixed positions, the Gray code pattern has the same pixels as the RGB image, and the neighborhood relations of the pixels are preserved. Determining the point-cloud coordinate point corresponding to each pixel from the Gray code pattern therefore also establishes a correspondence between pixels of the RGB image and coordinate points of the head point cloud, from which the neighborhood relations between coordinate points can be determined. Accordingly, to reduce the amount of computation and to facilitate the later construction of the triangular surfaces of the initial head model and its color filling, the correspondence between RGB image pixels and point-cloud coordinate points is saved.
In this embodiment, acquiring head point clouds in a plurality of preset directions positions the head feature points in three-dimensional space and avoids the problem of missing points in the model; acquiring the RGB images gives the three-dimensional head color information and a better display effect.
S102, stitching the construction surfaces from the plurality of preset directions to obtain an initial head model of the target user.
In S102, stitching connects the coordinate points of the plurality of surfaces along their boundaries, and includes, but is not limited to, coordinate point connection, coordinate point de-duplication, and coordinate point fusion. The initial head model is the mesh formed by stitching the plurality of head point clouds into one complete head point cloud.
Optionally, detect whether a coordinate point on the point-cloud boundary in one preset direction coincides in position with a coordinate point on the point-cloud boundary in an adjacent preset direction. If coincident coordinate points exist, compute the mean of the coincident coordinate points to obtain a mean coordinate point, delete the coincident coordinate points, and use the mean coordinate point as the new coordinate point at that position; if no coincident coordinate points exist, connect the adjacent coordinate points on the two boundaries. It should be understood that in this embodiment two or more coordinate points are judged coincident when the distances between them are within a preset distance range.
In this embodiment, stitching the head point clouds removes coincident coordinate points and fuses coordinate points, which reduces flying points in the model and improves model accuracy.
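The optional overlap handling above can be sketched in Python as follows. This is a rough illustration, not the patent's implementation; the function name and the `threshold` standing in for the preset distance range are hypothetical:

```python
import numpy as np

def merge_boundaries(pts_a, pts_b, threshold=1e-3):
    """Stitch two adjacent boundary point sets: points of pts_a and pts_b
    within `threshold` of each other are treated as coincident and replaced
    by their mean coordinate point; all other points are kept as-is."""
    merged, used_b = [], set()
    for p in pts_a:
        dists = np.linalg.norm(pts_b - p, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= threshold and j not in used_b:
            merged.append((p + pts_b[j]) / 2)  # mean replaces the coincident pair
            used_b.add(j)
        else:
            merged.append(p)
    # keep the boundary points of the second cloud that matched nothing
    merged.extend(pts_b[j] for j in range(len(pts_b)) if j not in used_b)
    return np.asarray(merged)
```

A real implementation would use a spatial index (k-d tree) instead of the brute-force distance scan, but the merge rule is the same.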
S103, color-filling the initial head model according to the color values in the RGB images to obtain the three-dimensional head model of the target user.
In S103, the color value of each coordinate point of the initial head model is obtained from the correspondence between pixels of the RGB images and coordinate points of the head point cloud; the initial head model is then filled according to the color value of each of its coordinate points to obtain the three-dimensional head model of the target user.
Optionally, if the Gray code patterns of the head are collected by multiple cameras, the coordinate points of the initial head model may be color-filled according to the three-dimensional coordinate points of the head feature points corresponding to each pixel of the Gray code patterns, as determined in step S101, together with the correspondence between pixels of the Gray code patterns and pixels of the RGB images.
In this embodiment, the color value of each coordinate point of the initial head model is determined directly from the correspondence between RGB image pixels and point-cloud coordinate points, without computing any complex positional relations, which reduces the computational load.
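The correspondence-based color filling can be sketched as follows. Here `point_index_map` is a hypothetical per-pixel array mapping each camera pixel to the index of the point-cloud coordinate point reconstructed from it (with -1 where no point was recovered) — an assumed representation, since the patent does not prescribe a data structure:

```python
import numpy as np

def fill_colors(point_index_map, rgb_image):
    """Assign each point-cloud coordinate point the RGB value of its source
    pixel. Works because the Gray code and RGB frames come from the same
    fixed camera, so pixel (r, c) corresponds to the same head feature point
    in both."""
    n_points = int(point_index_map.max()) + 1
    colors = np.zeros((n_points, 3), dtype=rgb_image.dtype)
    rows, cols = np.nonzero(point_index_map >= 0)
    colors[point_index_map[rows, cols]] = rgb_image[rows, cols]
    return colors
```

No nearest-neighbor or projection computation is needed, mirroring the reduced computational load described above.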
FIG. 2 is a schematic flowchart of another three-dimensional head modeling method according to an embodiment of the present application. Steps that are the same as in FIG. 1 are not described again here.
In one possible implementation manner, S101 includes S1011 and S1012:
S1011, acquiring head point clouds and RGB images corresponding to the target user in a plurality of preset directions, where each head point cloud includes a plurality of coordinate points with neighborhood relations;

S1012, constructing a plurality of construction surfaces for each preset direction according to the neighborhood relations of the coordinate points of the head point cloud in that direction.
In S1011 and S1012 above, a neighborhood relation is the positional adjacency between a coordinate point and other coordinate points, and the neighborhood relation of each coordinate point may be determined from the coordinates of all coordinate points. A construction surface may be a triangular surface formed by three adjacent coordinate points or a quadrilateral surface formed by four adjacent coordinate points.
Optionally, S1011 described above includes S201 to S203:
S201, acquiring, from a plurality of preset directions, the Gray code patterns projected onto the head of the target user and RGB images of the head;
In S201, the Gray code patterns may be projected onto the head of the target user by a projector, and the Gray code patterns and RGB images on the head may be acquired by a camera. Optionally, the projector is moved along a preset trajectory and projects the Gray code patterns during the movement, while the camera follows the same trajectory as the projector and collects the Gray code patterns and RGB images.
Optionally, the S201 specifically includes S2011 to S2013:
S2011, projecting a preset sequence of Gray code patterns onto the head of the target user from a plurality of preset directions;

S2012, in each preset direction, collecting the successive Gray code patterns on the head of the target user with two imaging devices arranged on either side of the projection direction of the Gray code patterns;

S2013, collecting an RGB image of the head of the target user in each preset direction.
In S2011 to S2013 above, a continuous Gray code pattern denotes a plurality of Gray code patterns arranged in time sequence. To avoid missing points and flying points and improve model accuracy, as shown in FIG. 5, projector 01, camera 02, and camera 03 are combined into an acquisition assembly, with the projector at the midpoint of the line between the two cameras. One acquisition assembly is fixedly installed in each preset direction, for example directly in front of, directly above, and directly to the left and right of the head. In each preset direction, the projector projects the successive Gray code patterns while the two cameras collect the Gray code patterns and the RGB images.
Further, to ensure that the Gray code patterns projected by the projector are collected by the cameras, after a preset delay the projector is controlled to stop projecting the Gray code patterns, and the cameras are then controlled to collect the RGB image of the head.
In this embodiment, projecting and collecting Gray code patterns in a plurality of preset directions covers every angle of the head completely, avoiding missing points and flying points in the model.
S202, determining the coordinate points of the head in three-dimensional space according to the Gray code patterns in each preset direction.
In S202, the Gray code pattern is a time-domain coding pattern. When successive Gray code patterns are projected onto the surface of an object, the patterns deform with the depth variation of the surface, but the position number obtained by decoding the Gray code at a given pixel of the pattern is unchanged. For example, suppose a pixel of the pattern before projection has position number row i, column j, and encoding yields a Gray code pattern containing the Gray code a corresponding to that position number. When this pattern is projected onto the head, it deforms with the relief of the head, and the camera acquires the deformed pattern (i.e. the pattern acquired by the camera differs from the pattern before projection). Decoding the deformed pattern recovers the position number row i, column j corresponding to Gray code a, so the pixel with that position number can be located on the deformed pattern. The coordinate point of the corresponding pixel in three-dimensional space is therefore determined from the deformation of two or more Gray code patterns.
Optionally, for each preset direction, take the Gray code pattern collected by one imaging device as a first Gray code pattern and the Gray code pattern collected by the other imaging device as a second Gray code pattern; S202 then specifically includes S2021 to S2023:

S2021, determining, from the first and second Gray code patterns in each preset direction, the homonymous pixel groups for that direction, where each homonymous pixel group includes a first pixel located in the first Gray code pattern and a second pixel located in the second Gray code pattern;

S2022, for each homonymous pixel group in each preset direction, determining the intersection in three-dimensional space of a first line and a second line, where the first line connects the optical center of the imaging device that collected the first Gray code pattern with the first pixel of the group, and the second line connects the optical center of the imaging device that collected the second Gray code pattern with the second pixel of the group;

S2023, taking each obtained intersection as a coordinate point of the head in three-dimensional space.
In S2021 to S2023, the intrinsic and extrinsic parameters of the cameras and their respective pose matrices may be calibrated using Zhang Zhengyou's calibration method, and the coordinate points determined by triangulation. Specifically, take the line in three-dimensional space from the optical center of one camera through pixel p in its imaging plane, and the line from the optical center of the other camera through pixel q in its imaging plane; intersect the two lines in three-dimensional space, or use a nearest-intersection method, to obtain the coordinate point of the pixel in three-dimensional space. Here p and q are homonymous pixels, i.e. the pixels at which the same light point projected by the projector is captured by the two cameras. Iterating over all homonymous pixel groups yields the three-dimensional coordinate points of all pixels, forming the three-dimensional head point cloud in each preset direction.
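The nearest-intersection step can be illustrated with the standard midpoint method for two (possibly skew) rays. This is a generic sketch of that technique, not the patent's exact formulation; each ray is given by a camera optical center and a direction through the matched pixel:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Return the midpoint of the shortest segment joining two rays
    o1 + s*d1 and o2 + t*d2 (the 'nearest intersection' of two lines
    of sight that may not meet exactly because of noise)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b  # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = o1 + s * d1  # closest point on ray 1
    p2 = o2 + t * d2  # closest point on ray 2
    return (p1 + p2) / 2
```

When the two lines of sight truly intersect, the midpoint coincides with the intersection; otherwise it is the natural compromise between the two rays.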
Further, the above S2021 may further include S20211 to S20213:
S20211, for the first and second Gray code patterns in each preset direction, decoding the Gray codes to obtain a first position number for each pixel of the first Gray code pattern and a second position number for each pixel of the second Gray code pattern;

S20212, for each preset direction, matching the first position numbers against the second position numbers;

S20213, taking the pixels whose first and second position numbers are identical as the first pixel and the second pixel, forming a homonymous pixel group for that preset direction.
In S20211 to S20213, assuming the resolution of the projector is a×b, each pixel of the pattern projected by the projector is converted into a Gray code: for the pixel in row i, column j, the decimal row number i is converted into n-bit binary form and the decimal column number j into m-bit binary form, and each binary form is then converted into Gray code form, thereby obtaining the Gray codes of all pixels of the pattern.
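The row/column Gray coding described above can be sketched as follows, using the standard reflected-binary conversion (the patent does not specify an implementation, so the function names are illustrative):

```python
def to_gray(value: int, bits: int) -> str:
    """Convert an integer to its reflected-binary (Gray) code as a bit string."""
    return format(value ^ (value >> 1), f"0{bits}b")

def encode_pixel(i: int, j: int, n_bits: int, m_bits: int) -> str:
    """Gray-code a pixel position: row i on n_bits bits, column j on m_bits bits,
    concatenated as described in the text."""
    return to_gray(i, n_bits) + to_gray(j, m_bits)
```

In practice each bit of this code becomes one black/white stripe pattern in the projected sequence, which is why consecutive Gray codes differing in only one bit make decoding robust at stripe boundaries.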
When the Gray code patterns are acquired by two cameras, the first and second Gray code patterns differ because the cameras view from different angles, but the Gray code at each pixel is unchanged. The Gray code of each pixel is therefore decoded back into a position number, which is the position of that pixel in the pattern projected by the projector. When a position number on the first Gray code pattern matches a position number on the second Gray code pattern, the corresponding pixels of the two patterns are homonymous pixels.
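Conversely, decoding and matching position numbers between the two cameras might look like this sketch. The dictionary representation (camera pixel → decoded position number) is a hypothetical choice, not prescribed by the patent:

```python
def from_gray(bits: str) -> int:
    """Decode a Gray-code bit string back to the integer position it encodes."""
    g = int(bits, 2)
    b = 0
    while g:  # each binary bit is the XOR of all higher Gray bits
        b ^= g
        g >>= 1
    return b

def match_homonymous(decoded_a, decoded_b):
    """Pair up pixels of the two cameras that decode to the same position
    number. decoded_a / decoded_b map a camera pixel (row, col) to its
    decoded position number."""
    by_pos_b = {pos: px for px, pos in decoded_b.items()}
    return [(px_a, by_pos_b[pos])
            for px_a, pos in decoded_a.items() if pos in by_pos_b]
```

Each returned pair is one homonymous pixel group, ready for the triangulation of S2022.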
S203, according to the coordinate points, forming a human head point cloud in each preset direction.
In S203, according to the neighborhood relation of the coordinate points, every three adjacent coordinate points are connected to construct a triangular surface, so as to obtain the human head point cloud corresponding to each preset direction.
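For a point cloud laid out on a regular grid, the step of connecting adjacent coordinate points into triangular surfaces can be sketched like this (illustrative only; the patent does not specify this exact triangulation):

```python
def grid_triangles(rows: int, cols: int) -> list[tuple[int, int, int]]:
    """Split each cell of a rows x cols grid of points into two triangles,
    indexing points row-major; each triangle joins three neighbouring points."""
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            p = r * cols + c                          # top-left point of the cell
            tris.append((p, p + 1, p + cols))              # upper-left triangle
            tris.append((p + 1, p + cols + 1, p + cols))   # lower-right triangle
    return tris
```

Each returned triple is an index triangle into the list of coordinate points, which matches the later notion of a "triangular surface index" of the model.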
Fig. 3 is a schematic flow chart of another modeling method for a three-dimensional human head according to an embodiment of the present application. Steps identical to those of the embodiments of Fig. 1 and Fig. 2 are not repeated here.
In one possible implementation manner, S102 specifically includes S301 to S303:
S301, establishing a space body surrounding all the human head point clouds in each preset direction, and dividing the space body into a plurality of subspace bodies of a preset size;
S302, if a plurality of coordinate points of the human head point cloud exist in a subspace body, calculating the mean coordinate point of the plurality of coordinate points in that subspace body, and taking the mean coordinate point as the coordinate point of the subspace body;
S303, updating the structural surface in each preset direction according to the neighborhood relation of the mean coordinate points;
S304, connecting the coordinate points of the structural surface in each preset direction to obtain an initial human head model of the target user.
In S301 to S304, the space body may be a cube, a sphere, a cylinder, or the like; a cube is preferred because it is more convenient to divide into subspace bodies of uniform size. Specifically, a cube is defined in the world coordinate system and cut into a plurality of small cubes of the same size according to a preset resolution. The number of coordinate points in each small cube is then detected: if a plurality of coordinate points exist in one small cube, their mean coordinate point is calculated and taken as the coordinate point of that small cube, and the neighborhood relations of the coordinate points are updated so as to update the triangular surface index (i.e., the structural surface) of the model.
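Steps S301 to S302 amount to a voxel-grid merge; a minimal Python sketch under the assumption that points are (x, y, z) tuples (`voxel_downsample` is an illustrative name, not the patent's):

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Assign each point to the small cube it falls into and replace all points
    in the same cube by their mean coordinate point (as in S301-S302)."""
    bins = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel_size) for c in p)   # index of the small cube
        bins[key].append(p)
    return [tuple(sum(axis) / len(pts) for axis in zip(*pts))
            for pts in bins.values()]
```

The voxel size here plays the role of the preset size of the subspace body: points closer together than one cube edge collapse into a single mean coordinate point.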
Optionally, since the positions and orientations of the acquisition assemblies in the preset directions are relatively fixed, the coordinate positions of the human head point clouds obtained by the acquisition assemblies are already aligned, and operations such as translation and rotation are not needed, which reduces the computational load. Coincident or approximately coincident coordinate points are processed through the small cubes, where approximately coincident coordinate points are coordinate points whose mutual distance is smaller than a preset range value.
Optionally, the above preset range value is taken as the preset size of the subspace body.
It should be understood that the sequence number of each step in the foregoing embodiments does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the modeling method of the three-dimensional human head described in the above embodiments, fig. 6 shows a block diagram of a modeling apparatus 600 of the three-dimensional human head provided in the embodiment of the present application, and for convenience of explanation, only the portions related to the embodiments of the present application are shown.
Referring to fig. 6, the apparatus includes:
the obtaining module 601 is configured to obtain a construction surface and an RGB image respectively corresponding to each of a plurality of preset directions of a target user, where the construction surface is formed by the human head point cloud of the target user, and the RGB image corresponding to each preset direction contains the color values of the coordinate points in the human head point cloud corresponding to that preset direction;
the splicing module 602 is configured to splice the construction surfaces in the plurality of preset directions to obtain an initial human head model of the target user;
and the filling module 603 is configured to perform color filling on the initial human head model according to color values corresponding to each coordinate point in the human head point cloud on the RGB image, so as to obtain a three-dimensional human head model of the target user.
It should be noted that, since the information interaction and execution processes between the above devices/units are based on the same concept as the method embodiments of the present application, their specific functions and technical effects may be found in the method embodiment section and are not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 7, the terminal device 7 of this embodiment includes: at least one processor 70 (only one shown in fig. 7), a memory 71 and a computer program 72 stored in the memory 71 and executable on the at least one processor 70, the processor 70 implementing the steps in any of the method embodiments described above when executing the computer program 72.
The terminal device 7 may be a mobile phone, a desktop computer, a notebook computer, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor 70, a memory 71. It will be appreciated by those skilled in the art that fig. 7 is merely an example of the terminal device 7 and is not limiting of the terminal device 7, and may include more or fewer components than shown, or may combine certain components, or different components, such as may also include input-output devices, network access devices, etc.
The processor 70 may be a central processing unit (Central Processing Unit, CPU), and the processor 70 may also be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 71 may in some embodiments be an internal storage unit of the terminal device 7, such as a hard disk or a memory of the terminal device 7. The memory 71 may in other embodiments also be an external storage device of the terminal device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 7. Further, the memory 71 may also include both an internal storage unit and an external storage device of the terminal device 7. The memory 71 is used for storing an operating system, application programs, boot loader (BootLoader), data, other programs, etc., such as program codes of the computer program. The memory 71 may also be used for temporarily storing data that has been output or is to be output.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the various method embodiments described above.
Embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform the steps of the various method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application implements all or part of the flow of the method of the above embodiments, which may be implemented by a computer program instructing related hardware; the computer program may be stored in a computer readable storage medium, and when executed by a processor, may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing device/terminal apparatus, a recording medium, computer memory, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), electrical carrier signals, telecommunications signals, and software distribution media, such as a USB flash drive, removable hard disk, magnetic disk, or optical disk. In some jurisdictions, computer readable media may not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.
Claims (7)
1. A method for modeling a three-dimensional human head, comprising:
acquiring a construction surface and an RGB image respectively corresponding to a target user in each of a plurality of preset directions, wherein the construction surface is formed by the human head point cloud of the target user, and the RGB image corresponding to each preset direction contains the color values of the coordinate points in the human head point cloud corresponding to that preset direction;
splicing the structural surfaces in the preset directions to obtain an initial human head model of the target user, wherein the method comprises the following steps: establishing a space body surrounding all the head point clouds in each preset direction, and dividing the space body into a plurality of subspace bodies with preset sizes; if a plurality of coordinate points of the human head point cloud exist in the subspace body, calculating a mean coordinate point of the plurality of coordinate points in the subspace body, and taking the mean coordinate point as the coordinate point of the subspace body; updating the structure surface in each preset direction according to the neighborhood relation of the mean coordinate points; connecting coordinate points of the structural surface in each preset direction to obtain an initial human head model of the target user; wherein the space body is a cube;
according to the color values corresponding to all coordinate points in the human head point cloud on the RGB image, performing color filling on the initial human head model to obtain a three-dimensional human head model of the target user;
wherein, for each preset direction, taking the Gray code pattern acquired by one image pickup device as a first Gray code pattern and taking the Gray code pattern acquired by the other image pickup device as a second Gray code pattern;
determining each homonymous pixel point group corresponding to each preset direction according to a first Gray code pattern and a second Gray code pattern in each preset direction, wherein each homonymous pixel point group comprises a first pixel point positioned in the first Gray code pattern and a second pixel point positioned in the second Gray code pattern;
for each homonymous pixel point group in each preset direction, determining an intersection point of a first connecting line and a second connecting line in a three-dimensional space, wherein the first connecting line is a connecting line between an optical center of the image pickup device for collecting the first Gray code pattern and the first pixel point in the homonymous pixel point group, and the second connecting line is a connecting line between an optical center of the image pickup device for collecting the second Gray code pattern and the second pixel point in the homonymous pixel point group;
taking each obtained intersection point as a coordinate point of the human head in a three-dimensional space;
the determining, for each first gray code pattern and second gray code pattern in the preset direction, each pixel group with the same name corresponding to the preset direction includes:
reversely encoding the first Gray code pattern and the second Gray code pattern according to the first Gray code pattern and the second Gray code pattern in each preset direction to obtain a first position number of a pixel point on the first Gray code pattern and a second position number of the pixel point on the second Gray code pattern;
for each preset direction, matching the first position number with the second position number;
and taking the pixel points with the same numbers and corresponding to the first position numbers and the second position numbers as the first pixel points and the second pixel points respectively to form a pixel point group with the same name in the preset direction.
2. The modeling method as defined in claim 1, wherein the obtaining of the construction surfaces and RGB images respectively corresponding to the target user in the plurality of preset directions includes:
acquiring human head point clouds and RGB images which respectively correspond to a target user in a plurality of preset directions, wherein the human head point clouds comprise a plurality of coordinate points with neighborhood relations;
and constructing a plurality of construction surfaces corresponding to each preset direction according to the neighborhood relation of a plurality of coordinate points of the human head point cloud in each preset direction.
3. The modeling method according to claim 2, wherein the obtaining the head point clouds and the RGB images respectively corresponding to the target user in the plurality of preset directions includes:
collecting Gray code patterns projected on the head of the target user from a plurality of preset directions and obtaining RGB images of the head of the target user;
determining coordinate points of the human head in a three-dimensional space according to the Gray code patterns in each preset direction;
and determining neighborhood relations of the coordinate points, and forming the coordinate points into human head point clouds in each preset direction.
4. A modeling method as claimed in claim 3, wherein said capturing gray code patterns projected on the head of the target user from a plurality of said preset directions and capturing RGB images of the head of the target user includes:
projecting a preset continuous gray code pattern from a plurality of preset directions to the head of the target user;
in each preset direction, respectively acquiring continuous Gray code patterns on the head of the target user through two image pickup devices, wherein the two image pickup devices are respectively arranged on two sides of the projection direction of the Gray code patterns;
and acquiring RGB images of the head of the target user in each preset direction.
5. A modeling apparatus for a three-dimensional human head, comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a construction surface and an RGB image which respectively correspond to a target user in a plurality of preset directions, wherein the construction surface is formed by a human head point cloud of the target user, and the RGB image corresponding to each preset direction contains color values of all coordinate points in the human head point cloud corresponding to the preset direction;
the splicing module is used for splicing the structural surfaces in the plurality of preset directions to obtain an initial human head model of the target user;
the filling module is used for carrying out color filling on the initial human head model according to the color values corresponding to all coordinate points in the human head point cloud on the RGB image to obtain a three-dimensional human head model of the target user;
the splicing module is specifically used for establishing a space body surrounding all the head point clouds in each preset direction and dividing the space body into a plurality of subspace bodies with preset sizes; if a plurality of coordinate points of the human head point cloud exist in the subspace body, calculating a mean coordinate point of the plurality of coordinate points in the subspace body, and taking the mean coordinate point as the coordinate point of the subspace body; updating the structure surface in each preset direction according to the neighborhood relation of the mean coordinate points; connecting coordinate points of the structural surface in each preset direction to obtain an initial human head model of the target user; wherein the space body is a cube;
wherein, for each preset direction, taking the Gray code pattern acquired by one image pickup device as a first Gray code pattern and taking the Gray code pattern acquired by the other image pickup device as a second Gray code pattern;
determining each homonymous pixel point group corresponding to each preset direction according to a first Gray code pattern and a second Gray code pattern in each preset direction, wherein each homonymous pixel point group comprises a first pixel point positioned in the first Gray code pattern and a second pixel point positioned in the second Gray code pattern;
for each homonymous pixel point group in each preset direction, determining an intersection point of a first connecting line and a second connecting line in a three-dimensional space, wherein the first connecting line is a connecting line between an optical center of the image pickup device for collecting the first Gray code pattern and the first pixel point in the homonymous pixel point group, and the second connecting line is a connecting line between an optical center of the image pickup device for collecting the second Gray code pattern and the second pixel point in the homonymous pixel point group;
taking each obtained intersection point as a coordinate point of the human head in a three-dimensional space;
the determining, for each first gray code pattern and second gray code pattern in the preset direction, each pixel group with the same name corresponding to the preset direction includes:
reversely encoding the first Gray code pattern and the second Gray code pattern according to the first Gray code pattern and the second Gray code pattern in each preset direction to obtain a first position number of a pixel point on the first Gray code pattern and a second position number of the pixel point on the second Gray code pattern;
for each preset direction, matching the first position number with the second position number;
and taking the pixel points with the same numbers and corresponding to the first position numbers and the second position numbers as the first pixel points and the second pixel points respectively to form a pixel point group with the same name in the preset direction.
6. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 4 when executing the computer program.
7. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010243821.3A CN111462309B (en) | 2020-03-31 | 2020-03-31 | Modeling method and device for three-dimensional head, terminal equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111462309A CN111462309A (en) | 2020-07-28 |
CN111462309B true CN111462309B (en) | 2023-12-19 |
Family
ID=71683476
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010243821.3A Active CN111462309B (en) | 2020-03-31 | 2020-03-31 | Modeling method and device for three-dimensional head, terminal equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111462309B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106164979A (en) * | 2015-07-13 | 2016-11-23 | 深圳大学 | Three-dimensional facial reconstruction method and system
CN109697688A (en) * | 2017-10-20 | 2019-04-30 | 虹软科技股份有限公司 | Method and apparatus for image processing
CN110443885A (en) * | 2019-07-18 | 2019-11-12 | 西北工业大学 | Three-dimensional head and face model reconstruction method based on random facial images
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120176380A1 (en) * | 2011-01-11 | 2012-07-12 | Sen Wang | Forming 3d models using periodic illumination patterns |
Non-Patent Citations (2)
Title |
---|
刘绍堂 (Liu Shaotang). Theory and Methods of Tunnel Deformation Monitoring and Prediction. Yellow River Water Conservancy Press, 2019, pp. 48-51. *
朱险峰; 侯贺; 韩玉川; 白云瑞; 吴植文. Research on structured-light phase unwrapping methods for three-dimensional human body scanning. Nanotechnology and Precision Engineering, No. 3, full text. *
Also Published As
Publication number | Publication date |
---|---|
CN111462309A (en) | 2020-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111145238B (en) | Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment | |
EP3951721A1 (en) | Method and apparatus for determining occluded area of virtual object, and terminal device | |
CN107223269B (en) | Three-dimensional scene positioning method and device | |
EP3614340B1 (en) | Methods and devices for acquiring 3d face, and computer readable storage media | |
WO2020207190A1 (en) | Three-dimensional information determination method, three-dimensional information determination device, and terminal apparatus | |
US10726580B2 (en) | Method and device for calibration | |
CN108288292A (en) | A kind of three-dimensional rebuilding method, device and equipment | |
WO2022021680A1 (en) | Method for reconstructing three-dimensional object by fusing structured light with photometry, and terminal device | |
CN113362446B (en) | Method and device for reconstructing object based on point cloud data | |
CN109754427A (en) | A kind of method and apparatus for calibration | |
CN113362445B (en) | Method and device for reconstructing object based on point cloud data | |
WO2023093739A1 (en) | Multi-view three-dimensional reconstruction method | |
CN112927306B (en) | Calibration method and device of shooting device and terminal equipment | |
CN109979013B (en) | Three-dimensional face mapping method and terminal equipment | |
CN112686950A (en) | Pose estimation method and device, terminal equipment and computer readable storage medium | |
CN113936099A (en) | Three-dimensional image reconstruction method and system based on monocular structured light and rotating platform | |
CN111709999A (en) | Calibration plate, camera calibration method and device, electronic equipment and camera system | |
CN115205383A (en) | Camera pose determination method and device, electronic equipment and storage medium | |
CN111383264A (en) | Positioning method, positioning device, terminal and computer storage medium | |
CN116415652A (en) | Data generation method and device, readable storage medium and terminal equipment | |
CN112102378B (en) | Image registration method, device, terminal equipment and computer readable storage medium | |
WO2022204953A1 (en) | Method and apparatus for determining pitch angle, and terminal device | |
CN111223139B (en) | Target positioning method and terminal equipment | |
CN111462309B (en) | Modeling method and device for three-dimensional head, terminal equipment and storage medium | |
CN111023994B (en) | Grating three-dimensional scanning method and system based on multiple measurement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||