
CN109472873A - Three-dimensional model generation method and device and hardware device - Google Patents

Three-dimensional model generation method and device and hardware device

Info

Publication number
CN109472873A
CN109472873A (application CN201811303618.XA; granted as CN109472873B)
Authority
CN
China
Prior art keywords
dimensional model
control
terminal equipment
generating
trigger signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811303618.XA
Other languages
Chinese (zh)
Other versions
CN109472873B (en)
Inventor
陈曼仪
陈怡�
潘皓文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tiktok Technology Co ltd
Original Assignee
Beijing Microlive Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Microlive Vision Technology Co Ltd filed Critical Beijing Microlive Vision Technology Co Ltd
Priority to CN201811303618.XA priority Critical patent/CN109472873B/en
Publication of CN109472873A publication Critical patent/CN109472873A/en
Application granted granted Critical
Publication of CN109472873B publication Critical patent/CN109472873B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a three-dimensional model generation method and apparatus, and a hardware device. The generation method includes: a terminal device displays a first control; the terminal device receives a trigger signal of the first control and generates a first three-dimensional model; the terminal device displays a second control; and the terminal device receives a trigger signal of the second control and generates a second three-dimensional model according to a movement amount of the terminal device and the first three-dimensional model. Based on a basic three-dimensional model, the method can directly modify the shape of the three-dimensional model by moving the terminal device, improving the flexibility and convenience of three-dimensional model generation.

Description

Three-dimensional model generation method and device and hardware device
Technical Field
The present disclosure relates to the field of three-dimensional model generation, and in particular, to a method, an apparatus, and a hardware apparatus for generating a three-dimensional model.
Background
Augmented Reality (AR) is a technology that calculates the position and angle of a camera image in real time and adds corresponding images, videos and virtual objects; it aims to overlay a virtual world onto the real world on a screen and allow interaction with that virtual world.
Augmented reality is realized by placing a virtual object in a real scene, that is, by superimposing the real environment and the virtual object on the same picture or space in real time. After being overlaid, the virtual object moves along a preset motion track, or is controlled to perform a preset action through a control. A virtual object in augmented reality is typically a three-dimensional model that has been created in advance in a third-party authoring tool and loaded into the real scene.
In existing augmented reality technology, the three-dimensional model cannot be modified directly; it must instead be modified in an authoring tool, which is cumbersome and inflexible.
Disclosure of Invention
According to one aspect of the present disclosure, the following technical solutions are provided:
a method of generating a three-dimensional model, comprising: the terminal equipment displays a first control; the terminal equipment receives a trigger signal of the first control and generates a first three-dimensional model; the terminal equipment displays a second control; and the terminal equipment receives the trigger signal of the second control and generates a second three-dimensional model according to the movement amount of the terminal equipment and the first three-dimensional model.
Further, the terminal device receives a trigger signal of the first control, and generates a first three-dimensional model, including: the terminal equipment receives a trigger signal of the first control and acquires an image of a real scene through an image sensor of the terminal equipment; the terminal equipment identifies a plane in the image; in response to identifying the plane, the terminal device generates a first three-dimensional model on the plane.
Further, in response to identifying the plane, the terminal device generates a first three-dimensional model on the plane, including: in response to identifying the plane, the terminal device displays a third control; and the terminal equipment receives a trigger signal of the third control and generates a first three-dimensional model on the plane.
Further, after the terminal device receives the trigger signal of the first control and generates the first three-dimensional model, the method further includes: the terminal equipment displays a fourth control; and the terminal equipment receives the trigger signal of the fourth control, shoots the screen picture of the terminal equipment and generates a picture or a video of the screen picture.
Further, after the terminal device receives the trigger signal of the fourth control and shoots the screen of the terminal device, the method further includes: the terminal equipment displays a sixth control; and the terminal equipment receives a trigger signal of the sixth control and edits the picture or the video of the screen picture.
Further, after the terminal device receives the trigger signal of the first control and generates the first three-dimensional model, the method further includes: the terminal equipment displays a fifth control; and the terminal equipment receives a trigger signal of a fifth control and generates a third three-dimensional model according to the first three-dimensional model or the second three-dimensional model.
Further, after the terminal device receives a trigger signal of a fifth control and generates a third three-dimensional model according to the first three-dimensional model or the second three-dimensional model, the method further includes: and the terminal equipment displays the first state of the fifth control.
Further, the terminal device displays a second control, including: the terminal equipment detects the distance between the terminal equipment and the first three-dimensional model; in response to the distance being within a first threshold, the terminal displaying a first state of a second control; in response to the distance being outside of the first threshold, the terminal displays a second state of a second control.
Further, the step of generating, by the terminal device, a second three-dimensional model according to the movement amount of the terminal device and the first three-dimensional model after the terminal device receives the trigger signal of the second control includes: the terminal equipment receives a trigger signal of a second control, detects the movement amount of the terminal equipment and analyzes the movement amount into a vertical movement component and a horizontal movement component; and the terminal equipment generates a second three-dimensional model according to the vertical movement component, the horizontal movement component and the first three-dimensional model.
Further, the terminal device generates a second three-dimensional model according to the vertical movement component, the horizontal movement component and the first three-dimensional model, and the method includes: and moving key points of the first three-dimensional model according to the vertical movement component and the horizontal movement component, and generating a second three-dimensional model according to the moved key points.
According to another aspect of the present disclosure, the following technical solutions are also provided:
an apparatus for generating a three-dimensional model, comprising:
the first control display module is used for displaying a first control by the terminal equipment;
the first model generation module is used for generating a first three-dimensional model when the terminal equipment receives a trigger signal of the first control;
the second control display module is used for displaying a second control by the terminal equipment;
and the second model generation module is used for generating a second three-dimensional model according to the movement amount of the terminal equipment and the first three-dimensional model when the terminal equipment receives the trigger signal of the second control.
Further, the first model generation module includes:
the image acquisition module is used for the terminal equipment to receive the trigger signal of the first control and acquire the image of the real scene through an image sensor of the terminal equipment;
the plane identification module is used for identifying a plane in the image by the terminal equipment;
and the first model generation submodule is used for responding to the identification of the plane, and the terminal equipment generates a first three-dimensional model on the plane.
Further, the first model generation submodule further includes:
the third control display module is used for responding to the identification of the plane and displaying a third control by the terminal equipment;
and the first model generation sub-module is used for generating a first three-dimensional model on the plane when the terminal equipment receives a trigger signal of the third control.
Further, the apparatus for generating a three-dimensional model further includes:
the fourth control display module is used for displaying a fourth control by the terminal equipment;
and the shooting module is used for shooting the screen picture of the terminal equipment after the terminal equipment receives the trigger signal of the fourth control so as to generate a picture or a video of the screen picture.
Further, the shooting module further includes:
the sixth control display module is used for displaying a sixth control by the terminal equipment;
and the editing module is used for the terminal equipment to receive the trigger signal of the sixth control and edit the picture or video of the screen picture.
Further, the apparatus for generating a three-dimensional model further includes:
the fifth control display module is used for displaying a fifth control by the terminal equipment;
and the third model generation module is used for generating a third three-dimensional model according to the first three-dimensional model or the second three-dimensional model after the terminal equipment receives a trigger signal of a fifth control.
Further, the apparatus for generating a three-dimensional model further includes:
and the fifth control state changing module is used for displaying the first state of the fifth control by the terminal equipment.
Further, the second control display module further includes:
the distance detection module is used for detecting the distance between the terminal equipment and the first three-dimensional model by the terminal equipment;
the second control state changing module is used for responding to the fact that the distance is within a first threshold value, and the terminal displays the first state of the second control; in response to the distance being outside of the first threshold, the terminal displays a second state of a second control.
Further, the second model generation module further includes:
the movement detection module is used for detecting the movement amount of the terminal equipment when the terminal equipment receives a trigger signal of a second control, and analyzing the movement amount into a vertical movement component and a horizontal movement component;
and the second model generation submodule is used for generating a second three-dimensional model by the terminal equipment according to the vertical movement component, the horizontal movement component and the first three-dimensional model.
Further, the second model generation submodule is further configured to:
and moving key points of the first three-dimensional model according to the vertical movement component and the horizontal movement component, and generating a second three-dimensional model according to the moved key points.
According to still another aspect of the present disclosure, there is also provided the following technical solution:
an electronic device, comprising: a memory for storing non-transitory computer readable instructions; and a processor for executing the computer readable instructions, so that the processor realizes the steps of any one of the three-dimensional model generation methods when executed.
According to still another aspect of the present disclosure, there is also provided the following technical solution:
a computer readable storage medium storing non-transitory computer readable instructions which, when executed by a computer, cause the computer to perform the steps of any of the methods described above.
The disclosure provides a three-dimensional model generation method, a three-dimensional model generation apparatus and a hardware device. The method for generating the three-dimensional model comprises the following steps: the terminal device displays a first control; the terminal device receives a trigger signal of the first control and generates a first three-dimensional model; the terminal device displays a second control; and the terminal device receives a trigger signal of the second control and generates a second three-dimensional model according to the movement amount of the terminal device and the first three-dimensional model. Based on a basic three-dimensional model, the method can directly modify the shape of the three-dimensional model by moving the terminal device, improving the flexibility and convenience of three-dimensional model generation.
The foregoing is a summary of the present disclosure, provided to promote a clear understanding of its technical means; the disclosure may be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
FIG. 1 is a flow diagram of a method of generating a three-dimensional model according to one embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a method of calculating vertical and horizontal components of movement according to one embodiment of the present disclosure;
FIG. 3 is a schematic diagram of generating a second three-dimensional model by moving keypoints, according to one embodiment of the present disclosure;
FIGS. 4a-4e are schematic diagrams of an example of a method of generating a three-dimensional model according to one embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an apparatus for generating a three-dimensional model according to one embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides a method for generating a three-dimensional model. The method provided by this embodiment can be executed by a computing device, which can be implemented as software or as a combination of software and hardware, and can be integrated into a server, a terminal device, or the like. As shown in fig. 1, the method for generating a three-dimensional model mainly includes the following steps S101 to S104. Wherein:
step S101: the terminal equipment displays a first control;
in this step, the terminal device may be a mobile terminal device with a display and an image sensor, and the terminal device may typically be a smart phone, a tablet computer, a personal digital assistant, or the like. The first control can be any form of control such as a virtual button, a slider and the like. Typically, the terminal device is a smart phone, which includes a touch screen, and a virtual button is displayed on the touch screen as a first control, where the first control may be located at any position on a screen of the terminal device.
Step S102: the terminal equipment receives a trigger signal of the first control and generates a first three-dimensional model;
in this embodiment, the first three-dimensional model is a preset three-dimensional model, the preset three-dimensional model may include a plurality of different styles or types, and a user may select a three-dimensional model to be displayed from the plurality of preset three-dimensional models or randomly display a three-dimensional model.
In one embodiment, when an image sensor of the terminal device is turned on, an image of a real scene is acquired through the image sensor. The image contains a plane of the scene, which may be a desktop, the ground, a wall, or a plane in various other real scenes. In a specific example of this embodiment, a user opens the rear camera of a smart phone; the rear camera collects an image and identifies a plane in the current scene. When a desktop in the current scene is scanned, a preset three-dimensional vase is generated on the desktop in the image, and the desktop and the three-dimensional vase are displayed on the display screen of the smart phone.
In one embodiment, in response to identifying the plane, the terminal device displays a third control; and the terminal equipment receives a trigger signal of the third control and generates a first three-dimensional model on the plane. The third control may be any form of control. In a specific example of the embodiment, a user opens a rear camera of a smart phone, the rear camera collects an image and identifies a plane in a current scene, when a desktop in the current scene is scanned, a terminal device displays a placement button on a screen, and after the user clicks the button, a preset three-dimensional vase is generated on the desktop in the image, and the desktop and the three-dimensional vase are displayed on a display screen of the smart phone.
In one embodiment, in response to identifying the plane, a configuration file of the first three-dimensional model is read, and the first three-dimensional model is generated on the plane according to the three-dimensional model configuration parameters in the configuration file. In this embodiment, each preset first three-dimensional model is described by a set of configuration parameters saved in the configuration file. When a plane is scanned, the configuration file of the preset three-dimensional model is read, the configuration parameters are obtained, and the first three-dimensional model is rendered on the terminal according to those parameters. Typical configuration parameters include: coordinates of the feature points of the three-dimensional model, the color and material of the three-dimensional model, the default position of the three-dimensional model, and the like. It is understood that these configuration parameters are only examples and do not limit the present disclosure; any configuration parameters that can configure a three-dimensional model may be applied in the technical solution of the present disclosure.
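As a minimal sketch, the configuration file described above might be stored as JSON; the field names below are hypothetical, since the disclosure only lists the kinds of parameters (feature-point coordinates, color, material, default position), not a concrete format.

```python
import json

# Hypothetical configuration file for a preset three-dimensional model.
CONFIG_TEXT = """
{
  "feature_points": [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0]],
  "color": [200, 180, 160],
  "material": "clay",
  "default_position": [0.0, 0.0, 0.0]
}
"""

def load_model_config(text):
    """Parse the configuration parameters used to render the first model."""
    return json.loads(text)

cfg = load_model_config(CONFIG_TEXT)
print(cfg["material"], len(cfg["feature_points"]))  # clay 3
```

On scanning a plane, an implementation would pass these parameters to its renderer to place the model at `default_position`.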
Step S103: the terminal equipment displays a second control;
in this step, the terminal device displays a second control on the screen, and the second control is displayed together with the first three-dimensional model.
In one embodiment, the second control comprises two states, an activated state and an inactivated state, determined by the distance between the terminal device and the first three-dimensional model. The terminal device detects this distance; in response to the distance being within a first threshold, the terminal displays the first state of the second control, and in response to the distance being outside the first threshold, the terminal displays the second state of the second control. To detect the distance, a perpendicular line is drawn through the center point of the terminal device to the plane in which the terminal device lies; the perpendicular intersects the first three-dimensional model, and the distance between the center point and the intersection point is taken as the distance between the terminal device and the first three-dimensional model. When the distance is within the preset threshold range, the first three-dimensional model can be operated and the second control is set to the activated state; when the distance is outside the preset threshold range, the first three-dimensional model cannot be operated and the second control is set to the inactivated state. It is understood that the terminal device and the first three-dimensional model are in the same world coordinate system, so the distance from the center point to the intersection point can be calculated from their coordinates in that system.
In another embodiment, the area occupied by the first three-dimensional model on the display of the terminal device may be detected. When the area is within a preset threshold range, the first three-dimensional model can be operated and the second control is set to the activated state; when the area is outside the preset threshold range, the first three-dimensional model cannot be operated and the second control is set to the inactivated state.
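The distance-based state switch can be sketched as follows; the function name and threshold semantics are illustrative, and the intersection point is passed in directly rather than computed from a ray cast against the model.

```python
import math

def second_control_state(device_center, intersection, first_threshold):
    """Return the second control's state from the device-to-model distance.

    Both points are 3D coordinates in the same world coordinate system;
    `intersection` is where the perpendicular through the device's center
    point meets the first three-dimensional model.
    """
    distance = math.dist(device_center, intersection)
    return "activated" if distance <= first_threshold else "inactivated"

print(second_control_state((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), 1.5))  # activated
print(second_control_state((0.0, 0.0, 3.0), (0.0, 0.0, 0.0), 1.5))  # inactivated
```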
Step S104: and the terminal equipment receives the trigger signal of the second control and generates a second three-dimensional model according to the movement amount of the terminal equipment and the first three-dimensional model.
In this embodiment, a user may move while holding the terminal device. When the terminal device detects movement, its movement amount is recorded, a generation parameter of the second three-dimensional model is obtained from the movement amount, and the second three-dimensional model is generated based on the first three-dimensional model according to that parameter. Further, the movement may be resolved into a vertical movement component and a horizontal movement component. The movement of the terminal device can be detected using a built-in acceleration sensor, typically a gyroscope or gravity sensor, or by collecting images with the image sensor of the terminal device and detecting the movement from changes in the images; here the vertical and horizontal directions are those of the plane in which the terminal device lies. A specific signal may also be used to determine the start and end points of the terminal device's movement.
In one embodiment, when the movement of the terminal device is detected, the direction and distance of the movement are determined. The direction can be represented by the angle between the horizontal direction and the line connecting the original position and the moved position of the terminal device, and the distance by the length of that line; from the angle and the length, the movement distances in the vertical and horizontal directions can be calculated. Specifically, as shown in fig. 2, when the terminal device moves from point A to point B and the angle between AB and the horizontal direction is θ, perpendiculars are drawn from point B to the vertical axis and to the horizontal axis, giving intersection points B1 and B2; AB1 and AB2 are the components of AB in the vertical and horizontal directions, respectively, and AB1 = AB·sin θ and AB2 = AB·cos θ.
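The decomposition in fig. 2 can be sketched as a short function; the coordinate convention (x horizontal, y vertical in the device's plane) is an assumption.

```python
import math

def resolve_movement(start, end):
    """Decompose a movement from point A (`start`) to point B (`end`) in
    the device's plane into its vertical (AB1) and horizontal (AB2)
    components, mirroring the construction of fig. 2."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    ab = math.hypot(dx, dy)            # length |AB|
    theta = math.atan2(dy, dx)         # angle between AB and the horizontal
    vertical = ab * math.sin(theta)    # AB1 = AB * sin(theta)
    horizontal = ab * math.cos(theta)  # AB2 = AB * cos(theta)
    return vertical, horizontal

v, h = resolve_movement((0.0, 0.0), (3.0, 4.0))
print(round(v, 6), round(h, 6))  # 4.0 3.0
```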
In one embodiment, determining the direction and distance of movement in response to detecting the movement of the terminal device comprises: in response to detecting a trigger signal, determining the starting point of the movement; in response to detecting the disappearance of the trigger signal, determining the end point of the movement; and determining the moving direction and distance from the starting point and the end point. In this embodiment, a trigger signal determines the start and end points of the movement of the terminal device. Typically, a trigger control may be set on the terminal device, for example a virtual button on the touch display screen of a smart phone. When the user presses and holds the virtual button, the current position of the terminal device is taken as the starting point of the movement; when the user releases the virtual button, the trigger signal disappears and the current position of the terminal device is taken as the end point. The angle between the horizontal direction and the line connecting the start and end positions is taken as the movement direction, and the length of that line as the movement distance.
In one embodiment, the height of the first three-dimensional model is changed according to the vertical movement component, and the width of the first three-dimensional model is changed according to the horizontal movement component, thereby generating a second three-dimensional model.
In one embodiment, keypoints of the first three-dimensional model are moved according to the vertical movement component and the horizontal movement component, and a second three-dimensional model is generated from the moved keypoints. Specifically, after the first three-dimensional model is generated, a perpendicular line is drawn through the center point of the terminal device to the plane in which the terminal device lies; the intersection of the perpendicular with the first three-dimensional model is the operation point. The keypoint of the first three-dimensional model closest to the intersection point is determined, moved by the vertical movement component in the vertical direction and by the horizontal movement component in the horizontal direction, and the second three-dimensional model is generated according to the moved position of the keypoint.
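A minimal sketch of the nearest-keypoint move, using 2D keypoints (x = horizontal, y = vertical) for simplicity; function and parameter names are assumptions.

```python
import math

def move_nearest_keypoint(keypoints, operation_point, vertical, horizontal):
    """Move the keypoint of the first model closest to the operation point
    (the perpendicular's intersection with the model) by the movement
    components, returning the keypoints of the second model."""
    nearest = min(range(len(keypoints)),
                  key=lambda i: math.dist(keypoints[i], operation_point))
    moved = [list(p) for p in keypoints]
    moved[nearest][0] += horizontal  # horizontal component shifts x
    moved[nearest][1] += vertical    # vertical component shifts y
    return moved

pts = move_nearest_keypoint([(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)],
                            (0.9, 0.1), vertical=0.2, horizontal=0.5)
print(pts[1])  # [1.5, 0.2]
```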
In another embodiment, after the key point is determined, a contour curve of the first three-dimensional model passing through the key point is determined. The key point is moved in the vertical direction by the distance of the vertical movement component and in the horizontal direction by the distance of the horizontal movement component, a new contour curve is generated from the moved key point, and the new contour curve is rotated around the central axis of the first three-dimensional model to generate the second three-dimensional model. A typical scenario for this embodiment is pottery making: the first three-dimensional model is the clay blank of a pot, and when the user moves a key point on the blank with a smartphone, the blank is stretched or squeezed according to the distance and direction of the key point's movement; as the blank rotates, the stretching and squeezing are applied around the whole three-dimensional blank, forming a new blank and completing the shaping process. In this embodiment, the contour curve may be a spline curve generated from a plurality of key points on the three-dimensional model. Fig. 3 is an example of this embodiment, in which point C is a key point on the first three-dimensional model, L is the central axis of the first three-dimensional model, and the first three-dimensional model is the cylinder shown in dotted lines. When the user moves the terminal device, point C moves with it. For simplicity, suppose the movement has only a horizontal component: after the horizontal movement distance of the terminal device is calculated, point C moves horizontally to point C1, the contour curve through C1 (which here is a straight line serving as the generatrix of the second three-dimensional model) is recalculated, and the second three-dimensional model, the cylinder shown in solid lines in Fig. 3, is generated from that generatrix.
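The rotation of the new contour curve around the central axis can be illustrated as a surface of revolution. The sketch below, in Python, assumes the contour is sampled as (radius, height) pairs and that the central axis is vertical; these representational choices are assumptions for illustration only, not the data structures of the disclosed method.

```python
import math

def revolve_contour(contour, segments=32):
    """Rotate a generatrix, sampled as (radius, height) pairs, around
    a vertical central axis to produce the vertex positions of the
    second three-dimensional model (a surface of revolution)."""
    vertices = []
    for k in range(segments):
        theta = 2.0 * math.pi * k / segments
        for radius, height in contour:
            vertices.append((radius * math.cos(theta),
                             height,
                             radius * math.sin(theta)))
    return vertices
```

For the cylinder of Fig. 3, the generatrix is a straight vertical segment at constant radius; moving point C horizontally changes that radius, and revolving the updated generatrix yields the new (solid-line) cylinder.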
In one embodiment, the method for generating a three-dimensional model further comprises: in response to detecting a first position selection signal, moving the first three-dimensional model or the second three-dimensional model to the first position. In this embodiment, the user may select a point outside the area of the first or second three-dimensional model, such as another point on the plane where the first three-dimensional model is located or a point on a different plane, and the first three-dimensional model is then moved to that point; the state of the three-dimensional model is adjusted according to the state of the plane at the new point, which may be the angle between that plane and the horizontal direction. This step may be performed at any point after step S102, such as after the first three-dimensional model is generated, after the first three-dimensional model is changed, or after the second three-dimensional model is generated, and the present disclosure is not particularly limited in this respect.
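Moving the model to the selected point and adjusting it to the tilt of the new plane could be sketched as below. The Python function, its name `place_model`, and the choice of the x-axis as the tilt axis through the target point are illustrative assumptions only.

```python
import math

def place_model(vertices, anchor, target, plane_angle_deg=0.0):
    """Translate model vertices so that `anchor` lands on the selected
    `target` point, then tilt the model about the x-axis through the
    target by the angle between the new plane and the horizontal
    (simplifying assumption: a fixed tilt axis and pivot)."""
    a = math.radians(plane_angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    moved = []
    for x, y, z in vertices:
        # translate so the anchor coincides with the target
        x = x - anchor[0] + target[0]
        y = y - anchor[1] + target[1]
        z = z - anchor[2] + target[2]
        # tilt about the x-axis through the target point
        ry, rz = y - target[1], z - target[2]
        y = target[1] + ry * cos_a - rz * sin_a
        z = target[2] + ry * sin_a + rz * cos_a
        moved.append((x, y, z))
    return moved
```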
In one embodiment, after step S102, the method may further include: the terminal device displays a fourth control; and the terminal device receives a trigger signal of the fourth control and captures the screen of the terminal device to generate a picture or a video of the screen. In this step, the fourth control is used to trigger shooting, where shooting may generate a picture or a video of the screen, thereby implementing a screen-capture or screen-recording function. In this embodiment, triggering the fourth control may record the generation process of the three-dimensional model.
In one embodiment, after the terminal device receives the trigger signal of the fourth control and captures the screen of the terminal device, the method further includes: the terminal device displays a sixth control; and the terminal device receives a trigger signal of the sixth control and edits the picture or the video of the screen. In this embodiment, after shooting is completed, the terminal device may display editing controls for further editing of the captured pictures and videos, including but not limited to: editing background music, adding special effects, adding stickers, adding filters, and the like.
In one embodiment, after step S102, the method may further include: the terminal device displays a fifth control; and the terminal device receives a trigger signal of the fifth control and generates a third three-dimensional model according to the first three-dimensional model or the second three-dimensional model. In this embodiment, the fifth control may change a property of the three-dimensional model without changing its shape, size, and the like; for example, the fifth control may change the texture and/or material of the three-dimensional model. As a specific example, the first three-dimensional model may be the clay blank of a pot and the fifth control may be a firing button; after the user clicks the firing button, the material of the blank is replaced with the material of a finished pot. Of course, the fifth control may change other attributes of the three-dimensional model and is not limited to texture and material, which will not be described again here. In this embodiment, the state of the fifth control may change after it is triggered; in the specific example above, after the firing button is triggered, it may be displayed in gray to indicate that firing has been completed and the button cannot be triggered again.
Figs. 4a-4e show a specific example of an embodiment of the present disclosure. As shown in Fig. 4a, the screen of the terminal device displays an interface in which the terminal device shows a first control; in this specific example, the first control is a create button (Create). As shown in Fig. 4b, when the user clicks the create button, a plane-scanning process is entered (Find a desk and scan it), in which the user moves the terminal device to scan the plane on which the three-dimensional model is to be placed. As shown in Fig. 4c, after the plane is recognized, the terminal device displays a third control; in this specific example, the third control is an arrow-shaped placement button together with a prompt (Tap to place object). When the user clicks the placement button, a first three-dimensional model is generated, as shown in Fig. 4d; in this specific example, the first three-dimensional model is the clay blank of a pot, and the terminal device displays a second control, which in this specific example is a button in the shape of a human hand. As shown in Fig. 4e, after the user presses the hand-shaped button and moves the terminal device, the first three-dimensional model is changed to generate a second three-dimensional model; in this specific example, the shape of the mouth and belly of the pot is changed to look more like a vase. The initial shape of the blank is the first three-dimensional model, and the shape after the mouth and belly are changed is the second three-dimensional model.
The present disclosure provides a three-dimensional model generation method, apparatus, and hardware device. The method for generating a three-dimensional model comprises: the terminal device displays a first control; the terminal device receives a trigger signal of the first control and generates a first three-dimensional model; the terminal device displays a second control; and the terminal device receives a trigger signal of the second control and generates a second three-dimensional model according to the movement amount of the terminal device and the first three-dimensional model. With this method, the shape of a three-dimensional model can be modified directly by moving the terminal device, starting from a basic three-dimensional model, which improves the flexibility and convenience of generating three-dimensional models.
In the above, although the steps in the above method embodiments are described in the above sequence, it should be clear to those skilled in the art that the steps in the embodiments of the present disclosure are not necessarily performed in the above sequence, and may also be performed in other sequences such as reverse, parallel, and cross, and further, on the basis of the above steps, other steps may also be added by those skilled in the art, and these obvious modifications or equivalents should also be included in the protection scope of the present disclosure, and are not described herein again.
For convenience of description, only the parts relevant to the embodiments of the present disclosure are shown; for specific technical details that are not disclosed here, please refer to the method embodiments of the present disclosure.
The embodiment of the disclosure provides a device for generating a three-dimensional model. The apparatus may perform the steps described in the above-described method embodiment of generating a three-dimensional model. As shown in fig. 5, the apparatus 500 mainly includes: a first control display module 501, a first model generation module 502, a second control display module 503, and a second model generation module 504. Wherein,
a first control display module 501, configured to display a first control on a terminal device;
a first model generation module 502, configured to receive a trigger signal of a first control and generate a first three-dimensional model;
a second control display module 503, configured to display a second control by the terminal device;
and a second model generation module 504, configured to generate a second three-dimensional model according to the movement amount of the terminal device and the first three-dimensional model when the terminal device receives the trigger signal of the second control.
Further, the first model generation module 502 includes:
the image acquisition module is used for the terminal equipment to receive the trigger signal of the first control and acquire the image of the real scene through an image sensor of the terminal equipment;
the plane identification module is used for identifying a plane in the image by the terminal equipment;
and the first model generation submodule is used for responding to the identification of the plane, and the terminal equipment generates a first three-dimensional model on the plane.
Further, the first model generation submodule further includes:
the third control display module is used for responding to the identification of the plane and displaying a third control by the terminal equipment;
and the first model generation sub-module is used for generating a first three-dimensional model on the plane when the terminal equipment receives a trigger signal of the third control.
Further, the apparatus 500 for generating a three-dimensional model further includes:
the fourth control display module is used for displaying a fourth control by the terminal equipment;
and the shooting module is used for shooting the screen picture of the terminal equipment after the terminal equipment receives the trigger signal of the fourth control so as to generate a picture or a video of the screen picture.
Further, the shooting module further includes:
the sixth control display module is used for displaying a sixth control by the terminal equipment;
and the editing module is used for the terminal equipment to receive the trigger signal of the sixth control and edit the picture or video of the screen picture.
Further, the apparatus 500 for generating a three-dimensional model further includes:
the fifth control display module is used for displaying a fifth control by the terminal equipment;
and the third model generation module is used for generating a third three-dimensional model according to the first three-dimensional model or the second three-dimensional model after the terminal equipment receives a trigger signal of a fifth control.
Further, the apparatus 500 for generating a three-dimensional model further includes:
and the fifth control state changing module is used for displaying the first state of the fifth control by the terminal equipment.
Further, the second control display module 503 further includes:
the distance detection module is used for detecting the distance between the terminal equipment and the first three-dimensional model by the terminal equipment;
the second control state changing module is used for responding to the fact that the distance is within a first threshold value, and the terminal displays the first state of the second control; in response to the distance being outside of the first threshold, the terminal displays a second state of a second control.
Further, the second model generation module 504 further includes:
the movement detection module is used for detecting the movement amount of the terminal equipment when the terminal equipment receives a trigger signal of a second control, and analyzing the movement amount into a vertical movement component and a horizontal movement component;
and the second model generation submodule is used for generating a second three-dimensional model by the terminal equipment according to the vertical movement component, the horizontal movement component and the first three-dimensional model.
Further, the second model generation submodule is further configured to:
and moving key points of the first three-dimensional model according to the vertical movement component and the horizontal movement component, and generating a second three-dimensional model according to the moved key points.
The apparatus shown in fig. 5 can perform the method of the embodiment shown in fig. 1, and reference may be made to the related description of the embodiment shown in fig. 1 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 1, and are not described herein again.
Referring now to FIG. 6, a block diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising the at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.

Claims (13)

1. A method of generating a three-dimensional model, comprising:
the terminal equipment displays a first control;
the terminal equipment receives a trigger signal of the first control and generates a first three-dimensional model;
the terminal equipment displays a second control;
and the terminal equipment receives the trigger signal of the second control and generates a second three-dimensional model according to the movement amount of the terminal equipment and the first three-dimensional model.
2. The method for generating a three-dimensional model according to claim 1, wherein the terminal device receives a trigger signal of a first control and generates the first three-dimensional model, and the method comprises the following steps:
the terminal equipment receives a trigger signal of the first control and acquires an image of a real scene through an image sensor of the terminal equipment;
the terminal equipment identifies a plane in the image;
in response to identifying the plane, the terminal device generates a first three-dimensional model on the plane.
3. The method of generating a three-dimensional model of claim 2, wherein said generating a first three-dimensional model on said plane by a terminal device in response to identifying said plane comprises:
in response to identifying the plane, the terminal device displays a third control;
and the terminal equipment receives a trigger signal of the third control and generates a first three-dimensional model on the plane.
4. The method for generating a three-dimensional model according to claim 1, wherein after the terminal device receives the trigger signal of the first control and generates the first three-dimensional model, the method further comprises:
the terminal equipment displays a fourth control;
and the terminal equipment receives the trigger signal of the fourth control, shoots the screen picture of the terminal equipment and generates a picture or a video of the screen picture.
5. The method for generating a three-dimensional model according to claim 4, wherein after the terminal device receives the trigger signal of the fourth control and shoots the screen of the terminal device, the method further comprises:
the terminal equipment displays a sixth control;
and the terminal equipment receives a trigger signal of the sixth control and edits the picture or the video of the screen picture.
6. The method for generating a three-dimensional model according to claim 1, wherein after the terminal device receives the trigger signal of the first control and generates the first three-dimensional model, the method further comprises:
the terminal equipment displays a fifth control;
and the terminal equipment receives a trigger signal of a fifth control and generates a third three-dimensional model according to the first three-dimensional model or the second three-dimensional model.
7. The method for generating a three-dimensional model according to claim 6, wherein after the terminal device receives the trigger signal of the fifth control and generates a third three-dimensional model according to the first three-dimensional model or the second three-dimensional model, the method further comprises:
and the terminal equipment displays the first state of the fifth control.
8. The method for generating a three-dimensional model according to claim 1, wherein the terminal device displays a second control comprising:
the terminal equipment detects the distance between the terminal equipment and the first three-dimensional model;
in response to the distance being within a first threshold, the terminal displaying a first state of a second control;
in response to the distance being outside of the first threshold, the terminal displays a second state of a second control.
9. The method for generating a three-dimensional model according to claim 1, wherein the generating of the second three-dimensional model according to the movement amount of the terminal device and the first three-dimensional model after the terminal device receives the trigger signal of the second control comprises:
the terminal equipment receives a trigger signal of a second control, detects the movement amount of the terminal equipment and analyzes the movement amount into a vertical movement component and a horizontal movement component;
and the terminal equipment generates a second three-dimensional model according to the vertical movement component, the horizontal movement component and the first three-dimensional model.
10. The method for generating a three-dimensional model according to claim 9, wherein the terminal device generates a second three-dimensional model from the vertical movement component, the horizontal movement component, and the first three-dimensional model, including:
and moving key points of the first three-dimensional model according to the vertical movement component and the horizontal movement component, and generating a second three-dimensional model according to the moved key points.
11. An apparatus for generating a three-dimensional model, comprising:
the first control display module is used for displaying a first control by the terminal equipment;
the first model generation module is used for generating a first three-dimensional model when the terminal equipment receives a trigger signal of the first control;
the second control display module is used for displaying a second control by the terminal equipment;
and the second model generation module is used for generating a second three-dimensional model according to the movement amount of the terminal equipment and the first three-dimensional model when the terminal equipment receives the trigger signal of the second control.
12. An electronic device, comprising:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer readable instructions such that the processor when executing performs the method of generating a three-dimensional model according to any one of claims 1-10.
13. A computer-readable storage medium storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the method of generating a three-dimensional model of any one of claims 1-10.
CN201811303618.XA 2018-11-02 2018-11-02 Three-dimensional model generation method, device and hardware device Active CN109472873B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811303618.XA CN109472873B (en) 2018-11-02 2018-11-02 Three-dimensional model generation method, device and hardware device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811303618.XA CN109472873B (en) 2018-11-02 2018-11-02 Three-dimensional model generation method, device and hardware device

Publications (2)

Publication Number Publication Date
CN109472873A true CN109472873A (en) 2019-03-15
CN109472873B CN109472873B (en) 2023-09-19

Family

ID=65666713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811303618.XA Active CN109472873B (en) 2018-11-02 2018-11-02 Three-dimensional model generation method, device and hardware device

Country Status (1)

Country Link
CN (1) CN109472873B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120101A (en) * 2019-04-30 2019-08-13 中国科学院自动化研究所 Cylindrical body augmented reality method, system, device based on 3D vision

Citations (15)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130187905A1 (en) * 2011-12-01 2013-07-25 Qualcomm Incorporated Methods and systems for capturing and moving 3d models and true-scale metadata of real world objects
KR20130101622A (en) * 2012-02-14 2013-09-16 구대근 Apparatus and system for 3 dimensional design using augmented reality and method for design evaluation
US20150286364A1 (en) * 2013-06-17 2015-10-08 Spreadtrum Communications (Shanghai) Co., Ltd. Editing method of the three-dimensional shopping platform display interface for users
CN105493154A (en) * 2013-08-30 2016-04-13 高通股份有限公司 System and method for determining the extent of a plane in an augmented reality environment
US20150221135A1 (en) * 2014-02-06 2015-08-06 Position Imaging, Inc. Virtual reality and augmented reality functionality for mobile devices
CN105513137A (en) * 2014-09-23 2016-04-20 小米科技有限责任公司 Three dimensional model and scene creating method and apparatus based on mobile intelligent terminal
CN104282041A (en) * 2014-09-30 2015-01-14 小米科技有限责任公司 Three-dimensional modeling method and device
CN105338391A (en) * 2015-12-11 2016-02-17 腾讯科技(深圳)有限公司 Intelligent television control method and mobile terminal
CN105974804A (en) * 2016-05-09 2016-09-28 北京小米移动软件有限公司 Method and device for controlling equipment
CN106373187A (en) * 2016-06-28 2017-02-01 上海交通大学 Two-dimensional image to three-dimensional scene realization method based on AR
CN108273265A (en) * 2017-01-25 2018-07-13 网易(杭州)网络有限公司 Display method and device for virtual objects
CN106896940A (en) * 2017-02-28 2017-06-27 杭州乐见科技有限公司 Method and device for controlling presentation effects of virtual objects
CN107452074A (en) * 2017-07-31 2017-12-08 上海联影医疗科技有限公司 Image processing method and system
CN107622524A (en) * 2017-09-29 2018-01-23 百度在线网络技术(北京)有限公司 Display method and display device for a mobile terminal
CN107945285A (en) * 2017-10-11 2018-04-20 浙江慧脑信息科技有限公司 Texture swapping and deformation method for a three-dimensional model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Zhining; Cui Bo; Ren Bingyu; Wu Binping; Guan Tao: "Research on constructing three-dimensional visualization scenes for rockfill dam engineering based on augmented reality", Water Power (水力发电), no. 05, pages 57 - 60 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120101A (en) * 2019-04-30 2019-08-13 中国科学院自动化研究所 Cylindrical body augmented reality method, system, device based on 3D vision
CN110120101B (en) * 2019-04-30 2021-04-02 中国科学院自动化研究所 Cylinder augmented reality method, system and device based on three-dimensional vision

Also Published As

Publication number Publication date
CN109472873B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
CN109582122B (en) Augmented reality information providing method and device and electronic equipment
CN112672185B (en) Augmented reality-based display method, device, equipment and storage medium
US11561651B2 (en) Virtual paintbrush implementing method and apparatus, and computer readable storage medium
US11880999B2 (en) Personalized scene image processing method, apparatus and storage medium
KR20230049691A (en) Video processing method, terminal and storage medium
CN112965780B (en) Image display method, device, equipment and medium
US20220159197A1 (en) Image special effect processing method and apparatus, and electronic device and computer readable storage medium
JP2023533295A (en) AUGMENTED REALITY IMAGE PROCESSING METHOD, APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
CN113806306B (en) Media file processing method, device, equipment, readable storage medium and product
CN112764845A (en) Video processing method and device, electronic equipment and computer readable storage medium
CN110070617B (en) Data synchronization method, device and hardware device
CN111352560B (en) Screen splitting method and device, electronic equipment and computer readable storage medium
CN113163135B (en) Animation adding method, device, equipment and medium for video
CN112906553B (en) Image processing method, apparatus, device and medium
CN114445600A (en) Method, device and equipment for displaying special effect prop and storage medium
CN114049403A (en) Multi-angle three-dimensional face reconstruction method and device and storage medium
CN109636917B (en) Three-dimensional model generation method, device and hardware device
WO2017024954A1 (en) Method and device for image display
CN109472873B (en) Three-dimensional model generation method, device and hardware device
CN111862342A (en) Texture processing method and device for augmented reality, electronic equipment and storage medium
CN115309317B (en) Media content acquisition method, apparatus, device, readable storage medium and product
CN110209861A (en) Image processing method, device, electronic equipment and computer readable storage medium
CN111200705B (en) Image processing method and device
CN111292276B (en) Image processing method and device
CN110070600B (en) Three-dimensional model generation method, device and hardware device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 100080 408, 4th floor, 51 Zhichun Road, Haidian District, Beijing

Patentee after: Tiktok Technology Co.,Ltd.

Country or region after: China

Address before: 100080 408, 4th floor, 51 Zhichun Road, Haidian District, Beijing

Patentee before: BEIJING MICROLIVE VISION TECHNOLOGY Co.,Ltd.

Country or region before: China