
CN109308741B - Meta 2-based natural interaction handicraft creative design system - Google Patents


Info

Publication number
CN109308741B
Authority
CN
China
Prior art keywords: model, user interface, graphical user, coordinates, meta2
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810894635.9A
Other languages
Chinese (zh)
Other versions
CN109308741A (en)
Inventor
权巍
张超
李华
韩成
薛耀红
胡汉平
陈纯毅
蒋振刚
杨华民
冯欣
杨贺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun University of Science and Technology filed Critical Changchun University of Science and Technology
Priority to CN201810894635.9A
Publication of CN109308741A
Application granted
Publication of CN109308741B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a Meta 2-based natural interaction handicraft creative design system, characterized in that the Meta 2 augmented reality glasses are connected to the corresponding interfaces of a workstation through HDMI and USB 3.0. With the glasses connected to the workstation and the user's hand inside the effective natural-interaction area, the system operates in the following steps. Step 1: create the system main scene, comprising a graphical user interface and a model editing area. Step 2: support model sculpting and model combination operations during the handicraft design process. Step 3: process the model combination. Step 4: support model export once the handicraft design is finished. Gesture recognition is accurate and operation is simple; the system facilitates creative inspiration and promotes the emergence of excellent creative design products.

Description

Meta 2-based natural interaction handicraft creative design system
Technical Field
The invention relates to a Meta 2-based natural interaction handicraft creative design system, and belongs to the field of augmented reality.
Background
Creative design is the process of extending and interpreting creative ideas and concepts through design. Inspiration is critical to producing an excellent design product: capturing sudden, fleeting inspiration in time can bring unexpected innovation and originality. Seizing the moment inspiration arises and promptly manipulating the design object to obtain realistic feedback can therefore effectively stimulate design, open up new lines of thought, and foster the creation of excellent design products.
At present, computer-aided design tools (such as the well-known CAD packages) are available in virtually every field; they improve design efficiency and, to an extent, help produce good design products. However, existing tools mainly complete the design work by drawing two-dimensional drawings or manipulating three-dimensional models on a computer through input devices such as a mouse and keyboard. Converting design thinking into a drawing or model with these tools is cumbersome, so designers cannot obtain real-time, realistic feedback during the design process, which limits the generation of design ideas and inspiration.
Disclosure of Invention
The invention aims to provide a Meta 2-based natural interaction handicraft creative design system. Built on augmented reality technology, it requires no input devices such as a mouse or keyboard during creative design; the design is completed through natural bare-hand interaction. Moreover, the designer directly manipulates a realistic three-dimensional design object and receives real-time feedback on the design effect. This greatly promotes the capture of inspiration and the execution of the design process, helping designers quickly create more excellent creative design products.
To achieve this purpose, the technical scheme of the invention is realized as follows. A Meta 2-based natural interaction handicraft creative design system comprises a workstation, Meta 2 augmented reality glasses, gesture input, and an effective natural-interaction area; it is characterized in that the Meta 2 augmented reality glasses are connected to the corresponding interfaces of the workstation through HDMI and USB 3.0. With the glasses connected to the workstation and the user's hand inside the effective natural-interaction area, the specific steps are as follows:
Step 1: create the system main scene, comprising a graphical user interface and a model editing area, both located inside the effective natural-interaction area. This area is defined as the region within an 88-degree horizontal field of view in front of the headset, at a depth of 0.35 m to 0.70 m. Create the graphical user interface inside the effective interaction area, building a basic canvas with the canvas system of the SDK provided with the Meta 2 augmented reality glasses; add a model selection button, model pictures, a model-rotation slider, and a model-scaling slider to the canvas. Create a blank model editing area inside the effective interaction area, so that the model selected by the user can be generated there.
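A minimal Unity C# sketch of this scene setup follows (the Meta 2 toolkit is a Unity SDK, so C# is the natural scripting language here). The canvas prefab reference, the GUI placement to the left of the editing area, and the component names are illustrative assumptions; only the editing-area position (0.03, -0.03, 0.4) and the 0.35-0.70 m depth band come from the text.

    using UnityEngine;
    using UnityEngine.UI;

    // Sets up the main scene: a world-space GUI canvas plus an empty
    // anchor for the model editing area, both inside the interaction volume.
    public class MainSceneSetup : MonoBehaviour
    {
        public Canvas guiCanvasPrefab; // hypothetical prefab holding the buttons, pictures and sliders

        void Start()
        {
            // Place the GUI inside the effective interaction volume:
            // 0.35-0.70 m in front of the headset, inside the 88° horizontal FOV.
            Canvas gui = Instantiate(guiCanvasPrefab);
            gui.renderMode = RenderMode.WorldSpace;
            gui.transform.position = new Vector3(-0.25f, 0f, 0.5f); // illustrative placement

            // Empty anchor at the world position used by step 202.
            var editingArea = new GameObject("ModelEditingArea");
            editingArea.transform.position = new Vector3(0.03f, -0.03f, 0.4f);
        }
    }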
Step 2: support model sculpting and model combination operations during the handicraft design process, wherein the sculpting process comprises the following substeps:
Step 201: identify the fingertips from the hand data captured by the Meta 2 augmented reality glasses; through the position-tracking sensor on the glasses, acquire in real time the coordinates of the left and right fingertips in the world coordinate system, denoted LTop(x_tl, y_tl, z_tl) and RTop(x_tr, y_tr, z_tr), respectively;
Step 202: monitor the user's fingertip position. When the user clicks a model picture in the graphical user interface with a fingertip and the fingertip reaches the trigger area of the GUI's canvas system, a click event is fired; the preform (prefab) resource of the selected model is located, and the model is generated via the Instantiate method at world position (0.03, -0.03, 0.4), i.e. in the model editing area. The model is denoted M(v_l1, v_l2, ..., v_ln) (hereafter model M), where v_l1, v_l2, ..., v_ln are the mesh-vertex coordinates of M in its own local coordinate system;
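The following sketch shows one way to wire this up in Unity, assuming the model picture is an ordinary UI Button and the Meta 2 input module delivers fingertip presses as standard UI click events; the field names are hypothetical.

    using UnityEngine;
    using UnityEngine.UI;

    // Step 202: clicking a model picture instantiates the chosen prefab
    // in the model editing area.
    public class ModelSpawner : MonoBehaviour
    {
        public Button modelPicture;    // the model picture element on the canvas
        public GameObject modelPrefab; // the model's preform (prefab) resource

        static readonly Vector3 EditingAreaPos = new Vector3(0.03f, -0.03f, 0.4f);

        void Awake()
        {
            modelPicture.onClick.AddListener(SpawnModel);
        }

        void SpawnModel()
        {
            // Generate model M at the editing-area position via Instantiate.
            Instantiate(modelPrefab, EditingAreaPos, Quaternion.identity);
        }
    }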
Step 203: control model rotation through the graphical user interface. The user slides the model-rotation slider with a fingertip, and model M is rotated about its own vertical axis through the transform.Rotate method. The slider's value range is defined as 1 to 360, giving a controllable rotation speed of (1/360) r/s to 1 r/s;
Step 204: control model enlargement and reduction through the graphical user interface. The user slides the model-scaling slider with a fingertip, and the enlargement and reduction of model M are controlled through transform.localScale. The slider's value range is defined as 1 to 2, so the model size can be varied between 1 and 2 times its original size;
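A compact sketch of steps 203-204, assuming standard UnityEngine.UI sliders configured with the ranges stated above; `model` refers to the spawned model M.

    using UnityEngine;
    using UnityEngine.UI;

    // Steps 203-204: slider-driven rotation and uniform scaling of model M.
    public class ModelAdjuster : MonoBehaviour
    {
        public Slider rotationSlider; // value 1..360 -> speed (1/360) r/s .. 1 r/s
        public Slider scaleSlider;    // value 1..2   -> 1x .. 2x original size
        public Transform model;

        Vector3 baseScale;

        void Start() { baseScale = model.localScale; }

        void Update()
        {
            // A slider value v means v degrees per second, i.e. v/360 r/s,
            // so the full range 1..360 maps to (1/360) r/s .. 1 r/s.
            model.Rotate(Vector3.up, rotationSlider.value * Time.deltaTime, Space.Self);

            // transform.localScale drives the enlargement/reduction.
            model.localScale = baseScale * scaleSlider.value;
        }
    }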
Step 205: define two spherical colliders and bind them to the left and right fingertips LTop(x_tl, y_tl, z_tl) and RTop(x_tr, y_tr, z_tr), respectively;
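One way to realize the binding in Unity is sketched below. The `leftTip`/`rightTip` transforms are assumed to be updated each frame with LTop/RTop from the Meta 2 hand data; trigger colliders plus kinematic rigidbodies are used so that fingertip contact raises events without applying physical forces (the patent does not prescribe these physics settings).

    using UnityEngine;

    // Step 205: attach a small sphere collider to each tracked fingertip.
    public class FingertipColliders : MonoBehaviour
    {
        public Transform leftTip;  // follows LTop(x_tl, y_tl, z_tl)
        public Transform rightTip; // follows RTop(x_tr, y_tr, z_tr)

        void Start()
        {
            Bind(leftTip);
            Bind(rightTip);
        }

        static void Bind(Transform tip)
        {
            var sphere = tip.gameObject.AddComponent<SphereCollider>();
            sphere.isTrigger = true;
            sphere.radius = 0.01f; // ~1 cm fingertip proxy (illustrative value)

            // Kinematic rigidbody: the sphere follows the tracked position and
            // still generates trigger events against the model's mesh collider.
            var rb = tip.gameObject.AddComponent<Rigidbody>();
            rb.isKinematic = true;
            rb.useGravity = false;
        }
    }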
Step 206: define a deformation script and attach it to model M. Specifically, add a mesh collider to model M and listen for collision events. When a spherical collider bound to a fingertip touches the surface of model M, it generates a collision event with M's mesh collider. Once the deformation script detects the event, the mesh-vertex coordinates of M in its local coordinate system, (v_l1, v_l2, ..., v_ln), are converted to coordinates in the world coordinate system, (v_w1, v_w2, ..., v_wn). The world coordinates of the vertices at the collision point are denoted (v_wi, v_w(i+1), ..., v_wj), 1 ≤ i ≤ j ≤ n, and the corresponding collision-point normal vectors are denoted (c_wi, c_w(i+1), ..., c_wj). Following this collision-detection approach, the mesh-vertex positions of model M at the collision point are changed by the formula:

(v'_wi, v'_w(i+1), ..., v'_wj) = (v_wi, v_w(i+1), ..., v_wj) + (c_wi, c_w(i+1), ..., c_wj) × (d × f), 1 ≤ i ≤ j ≤ n

where d is the displacement direction of the mesh vertices of model M, f is the deformation intensity, and (v'_wi, v'_w(i+1), ..., v'_wj) are the world coordinates of the displaced mesh vertices of M.

The mesh-vertex coordinates of model M are then converted from the world coordinate system back to its local coordinate system, yielding (v'_l1, v'_l2, ..., v'_ln); from these coordinates, the vertex normals are recalculated with the mesh.RecalculateNormals method, giving the deformed model, denoted M'(v'_l1, v'_l2, ..., v'_ln) (hereafter model M'). This realizes the sculpting effect;
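A runnable sketch of such a deformation script follows. Two points are assumptions rather than the patent's exact method: the contact region is approximated by all vertices within a small radius of the fingertip (instead of reading contact points off the mesh collider), and trigger events from the step-205 spheres stand in for collision events. Setting d = -1 dents the surface inward; d = +1 raises it.

    using UnityEngine;

    // Step 206: push mesh vertices near the touching fingertip along their
    // normals by d * f, then recalculate normals (mesh.RecalculateNormals).
    [RequireComponent(typeof(MeshFilter), typeof(MeshCollider))]
    public class DeformOnTouch : MonoBehaviour
    {
        public float d = -1f;         // displacement direction of the vertices
        public float f = 0.002f;      // deformation intensity per event, in metres
        public float radius = 0.015f; // assumed contact-region radius

        Mesh mesh;

        void Start()
        {
            // Work on a mesh instance so the shared prefab mesh stays intact.
            mesh = GetComponent<MeshFilter>().mesh;
        }

        void OnTriggerStay(Collider fingertip)
        {
            Vector3[] v = mesh.vertices; // v_l1 .. v_ln (local coordinates)
            Vector3[] n = mesh.normals;
            Vector3 tip = fingertip.transform.position;

            for (int i = 0; i < v.Length; i++)
            {
                // v_wi: vertex converted to the world coordinate system.
                Vector3 vw = transform.TransformPoint(v[i]);
                if ((vw - tip).sqrMagnitude > radius * radius) continue;

                // v'_wi = v_wi + c_wi * (d * f), then back to local coordinates.
                Vector3 cw = transform.TransformDirection(n[i]);
                v[i] = transform.InverseTransformPoint(vw + cw * (d * f));
            }

            mesh.vertices = v;
            mesh.RecalculateNormals();
            mesh.RecalculateBounds();
            // Keep the collider in sync with the deformed mesh (costly; fine for a sketch).
            GetComponent<MeshCollider>().sharedMesh = mesh;
        }
    }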
Step 207: clicking the re-sculpt button in the graphical user interface with a fingertip restores model M' to model M; repeating step 206 then sculpts the model anew;
Step 3: the model-combination process comprises the following substeps:
Step 301: repeat steps 201 and 202 to select a new model, denoted model S;
Step 302: identify the palms from the hand data captured by the Meta 2. Through the position-tracking sensor on the Meta 2 augmented reality glasses, acquire in real time the coordinates of the left and right palms in the world coordinate system, denoted LPalm(x_pl, y_pl, z_pl) and RPalm(x_pr, y_pr, z_pr), respectively. From the information in LTop(x_tl, y_tl, z_tl), RTop(x_tr, y_tr, z_tr), LPalm(x_pl, y_pl, z_pl) and RPalm(x_pr, y_pr, z_pr), two hand gestures are defined: a grip (Grab) gesture and a release (Release) gesture. Taking the left hand as an example, when

[grip condition: in the source this inequality on the fingertip and palm coordinates appears only as an image and is not recoverable]

holds, the hand is in the grip posture; otherwise it is in the release posture. The right hand is handled in the same way;
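Since the original inequality survives only as an image, the sketch below uses a plausible reading consistent with the tracked quantities: the hand grips when its fingertip has curled to within a threshold distance of its palm. The threshold value is an assumption.

    using UnityEngine;

    public enum HandGesture { Grab, Release }

    // Step 302: classify grip/release from fingertip and palm positions.
    public static class GestureClassifier
    {
        const float GrabThreshold = 0.05f; // metres; illustrative value

        public static HandGesture Classify(Vector3 top, Vector3 palm)
        {
            // Grip when ||Top - Palm|| < threshold, otherwise release;
            // applied identically to LTop/LPalm and RTop/RPalm.
            return (top - palm).magnitude < GrabThreshold
                ? HandGesture.Grab
                : HandGesture.Release;
        }
    }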
Step 303: on the basis of the grip and release gestures, the user interacts with the models in the system main scene naturally with both hands. The GrabInteraction, TwoHandGrabRotateInteraction and TwoHandGrabScaleInteraction scripts provided with the Meta 2 augmented reality glasses are mounted on the models, so that the user can move, rotate, enlarge and reduce a model by interacting with it with one hand or both hands;
Step 304: move model M' and model S until they touch or overlap; clicking the model-combination button in the graphical user interface with a fingertip triggers detection of whether another model (model S) is inside the trigger area of model M'. If it is, model S is set as a child object of model M'; in subsequent operations the position and rotation information of model S stay consistent with model M', and its size is enlarged and reduced together with model M', realizing the model-combination effect;
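A sketch of the combination step, assuming model M' carries a trigger collider slightly larger than its mesh (so an overlapping model S is detected) and that the moved models carry rigidbodies, as Unity requires for trigger events; `combineButton` is the canvas button from step 1.

    using UnityEngine;
    using UnityEngine.UI;

    // Step 304: on button click, parent the overlapping model S to model M'
    // so that S follows M's position, rotation and scaling from then on.
    public class ModelCombiner : MonoBehaviour
    {
        public Button combineButton;
        Collider other; // model S, while it is inside M's trigger area

        void Awake() { combineButton.onClick.AddListener(Combine); }

        void OnTriggerEnter(Collider c) { other = c; }
        void OnTriggerExit(Collider c)  { if (other == c) other = null; }

        void Combine()
        {
            if (other == null) return; // no model attached or overlapping
            other.transform.SetParent(transform, worldPositionStays: true);
        }
    }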
Step 4: support the model export operation after the handicraft design is finished. The user clicks the model-export button in the graphical user interface with a fingertip to trigger the export function; the models in the model editing area are exported as a 3D model file in OBJ format and a material-library file in MTL format, and both files are saved in a local folder designated by the user. The OBJ file stores the mesh-vertex coordinates, vertex normals and related information of the 3D model, and the MTL file stores its RGB colour, texture and related information, so that a model designed in this system can be opened and edited in other 3D modelling software or printed with a 3D printer.
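A minimal exporter for the geometry half of this step is sketched below; it writes the vertices, normals and faces that the text says the OBJ file must carry. Writing the companion MTL material library is omitted, and the output path is assumed to be supplied by the caller.

    using System.IO;
    using System.Text;
    using UnityEngine;

    // Step 4 (geometry only): serialize a mesh to Wavefront OBJ text.
    public static class ObjExporter
    {
        public static void Export(MeshFilter model, string path)
        {
            Mesh m = model.mesh;
            var sb = new StringBuilder();

            foreach (Vector3 v in m.vertices) // mesh-vertex coordinates
                sb.AppendLine($"v {v.x} {v.y} {v.z}");
            foreach (Vector3 n in m.normals)  // vertex normals
                sb.AppendLine($"vn {n.x} {n.y} {n.z}");

            int[] t = m.triangles;            // faces; OBJ indices are 1-based
            for (int i = 0; i < t.Length; i += 3)
                sb.AppendLine($"f {t[i] + 1}//{t[i] + 1} {t[i + 1] + 1}//{t[i + 1] + 1} {t[i + 2] + 1}//{t[i + 2] + 1}");

            File.WriteAllText(path, sb.ToString());
        }
    }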
The invention has the following positive effects. All design is, in essence, imagination of the future; presenting that imagined future of the designed object in time greatly stimulates inspiration, and quick, convenient adjustment of the design object is a key factor in producing excellent creative design products. Realized with Meta 2 augmented reality glasses on the basis of natural interaction, non-rigid-body simulation, multi-model fusion and related techniques, the system conforms to real-world design habits while retaining the efficiency of a computer-aided design tool. Gesture recognition is accurate and operation is simple; the entire design process can be completed in the system while the design effect is observed realistically in real time, which facilitates creative inspiration and promotes the emergence of excellent creative design products.
Drawings
FIG. 1 is a schematic diagram of the Meta 2-based natural interaction handicraft creative design system, in which 1 is the workstation, 2 the Meta 2 augmented reality glasses, 3 the gesture input, and 4 the effective natural-interaction area.
FIG. 2 is a schematic diagram of model sculpting in the creative design of a handicraft article.
Fig. 3 is a schematic view of a hand gripping posture.
Fig. 4 is a schematic view of a hand release gesture.
Detailed Description
The embodiments of the present invention will be described in detail below with reference to the drawings and specific examples.
Example 1: design of a coffee cup
In the Meta 2-based natural interaction handicraft creative design system, a coffee cup is created according to the following steps:
Step 1: create the system main scene, comprising a graphical user interface and a model editing area. Create the graphical user interface and a canvas inside the effective natural-interaction area; add elements such as a model selection button, a cup-body model picture, a cup-handle model picture, a cup-model rotation slider and a cup-model scaling slider to the canvas; and create a blank model editing area inside the effective interaction area so that the model selected by the user can be generated there.
Step 2: the invention supports sculpting the cup-body model and combining the cup body with the cup handle and other models during the design of the craft coffee cup. The sculpting of the cup-body model comprises the following substeps:
Step 201: identify the fingertips from the hand data captured by the Meta 2 augmented reality glasses. Through the position-tracking sensor on the glasses, acquire in real time the coordinates of the left and right fingertips in the world coordinate system, denoted LTop(x_tl, y_tl, z_tl) and RTop(x_tr, y_tr, z_tr), respectively;
Step 202: monitor the user's fingertip position. The user clicks the cup-body model picture in the graphical user interface with a fingertip; when the fingertip reaches the trigger area of the GUI's canvas system, a click event is fired. The prefab resource of the selected cup-body model is located and the model is generated via the Instantiate method at world position (0.03, -0.03, 0.4), in the model editing area. The cup-body model is denoted M(v_l1, v_l2, ..., v_ln) (hereafter model M), where v_l1, v_l2, ..., v_ln are the mesh-vertex coordinates of the cup-body model M in its own local coordinate system;
Step 203: control the rotation of the cup-body model M through the graphical user interface. The user slides the cup-model rotation slider with a fingertip, and the cup-body model M is rotated about its own vertical axis through the transform.Rotate method. The slider's value range is 1 to 360, giving a controllable rotation speed of (1/360) r/s to 1 r/s;
Step 204: control the enlargement and reduction of the cup-body model through the graphical user interface. The user slides the cup-model scaling slider with a fingertip, and the enlargement and reduction of the cup-body model M are controlled through transform.localScale. The slider's value range is 1 to 2, so the model size can be varied between 1 and 2 times its original size;
Step 205: define two spherical colliders and bind them to the left and right fingertips LTop(x_tl, y_tl, z_tl) and RTop(x_tr, y_tr, z_tr), respectively;
Step 206: define a deformation script and mount it on the cup-body model M. Specifically, add a mesh collider to the cup-body model M and listen for collision events. When the user touches the surface of the cup-body model M with a fingertip, the spherical collider bound to the fingertip position and the mesh collider of the cup-body model M generate a collision event. Once the deformation script detects the event, the mesh-vertex coordinates of the cup-body model M in its local coordinate system, (v_l1, v_l2, ..., v_ln), are converted to world coordinates, (v_w1, v_w2, ..., v_wn). Following the collision-detection approach, the world coordinates of the collision point, (v_wi, v_w(i+1), ..., v_wj), 1 ≤ i ≤ j ≤ n, and the collision-point normal vectors, (c_wi, c_w(i+1), ..., c_wj), yield the displaced world coordinates (v'_wi, v'_w(i+1), ..., v'_wj) of the mesh vertices of the cup-body model M.
Convert the mesh-vertex coordinates of the cup-body model M from the world coordinate system back to its local coordinate system to obtain (v'_l1, v'_l2, ..., v'_ln); from these coordinates, recalculate the vertex normals with the mesh.RecalculateNormals method to obtain the deformed cup-body model, denoted M'(v'_l1, v'_l2, ..., v'_ln) (hereafter cup-body model M'), realizing the sculpting effect on the cup-body model, as shown in FIG. 2;
Step 207: repeat steps 203, 204, 205 and 206 as many times as needed to complete the cup-body design. If the design needs to be reset, the re-sculpt button in the graphical user interface can be clicked with a fingertip to restore the cup-body model M' to the cup-body model M, which can then be sculpted again;
Step 3: combining the cup-body model with other models such as the cup handle comprises the following substeps (taking the combination of the cup body and the cup handle as an example):
Step 301: repeat steps 201 and 202 to select a cup-handle model, denoted cup-handle model S;
Step 302: identify the palms from the hand data captured by the Meta 2 augmented reality glasses. Through the position-tracking sensor on the Meta 2, acquire in real time the coordinates of the left and right palms in the world coordinate system, denoted LPalm(x_pl, y_pl, z_pl) and RPalm(x_pr, y_pr, z_pr), respectively. From the information in LTop(x_tl, y_tl, z_tl), RTop(x_tr, y_tr, z_tr), LPalm(x_pl, y_pl, z_pl) and RPalm(x_pr, y_pr, z_pr), two hand gestures are defined, a grip (Grab) gesture and a release (Release) gesture, as shown in FIG. 3 and FIG. 4. Taking the left hand as an example, when

[grip condition: in the source these two inequalities on the fingertip and palm coordinates appear only as images and are not recoverable]

hold, the hand is in the grip posture; otherwise it is in the release posture. The right hand is handled in the same way;
Step 303: on the basis of the grip and release gestures, the invention lets the user interact with the models in the system main scene naturally with both hands. The GrabInteraction, TwoHandGrabRotateInteraction and TwoHandGrabScaleInteraction scripts provided with the Meta 2 augmented reality glasses are mounted on the cup-body model M' and the cup-handle model S, so that the user can move, rotate, enlarge and reduce the cup-body model M' or the cup-handle model S by interacting with it with one hand or both hands;
Step 304: move the cup-body model M' and the cup-handle model S until they touch or overlap, then click the model-combination button in the graphical user interface with a fingertip to trigger detection. The cup-handle model S is now inside the trigger area of the cup-body model M', so S is set as a child object of M'; in subsequent operations the position and rotation information of the cup-handle model S stay consistent with the cup-body model M', and its size is enlarged and reduced together with M', realizing the combination of the cup-body and cup-handle models;
Step 4: the invention supports exporting the coffee-cup model after the design of the craft coffee cup is finished. The user clicks the model-export button in the graphical user interface with a fingertip to trigger the export function; the designed coffee-cup model in the model editing area is exported as a 3D model file in OBJ format and a material-library file in MTL format. Both files are stored in a local folder designated by the user: the OBJ file stores the mesh-vertex coordinates, vertex normals and related information of the 3D model, and the MTL file stores its RGB colour, texture and related information, so that the coffee-cup model designed in this system can be edited in other 3D modelling software or printed with a 3D printer.

Claims (1)

1. A Meta 2-based natural interaction handicraft creative design system, comprising: a workstation, Meta 2 augmented reality glasses, gesture input, and an effective natural-interaction area; characterized in that the Meta 2 augmented reality glasses are connected to the corresponding interfaces of the workstation through HDMI and USB 3.0, and in that, with the Meta 2 augmented reality glasses connected to the workstation and the user's hand inside the effective natural-interaction area, the specific steps are as follows:
Step 1: create the system main scene, comprising a graphical user interface and a model editing area, both located inside the effective natural-interaction area, this area being defined as the region within an 88-degree horizontal field of view in front of the headset, at a depth of 0.35 m to 0.70 m; create the graphical user interface inside the effective interaction area, building a basic canvas with the canvas system in the SDK (software development kit) provided with the Meta 2 augmented reality glasses; add a model selection button, model pictures, a model-rotation slider, and a model-scaling slider to the canvas; create a blank model editing area inside the effective interaction area, so that the model selected by the user can be generated there;
Step 2: support model sculpting and model combination operations during the handicraft design process, wherein the sculpting process comprises the following substeps:
Step 201: identify the fingertips from the hand data captured by the Meta 2 augmented reality glasses; through the position-tracking sensor on the glasses, acquire in real time the coordinates of the left and right fingertips in the world coordinate system, denoted LTop(x_tl, y_tl, z_tl) and RTop(x_tr, y_tr, z_tr), respectively;
Step 202: monitor the user's fingertip position; when the user clicks a model picture in the graphical user interface with a fingertip and the fingertip reaches the trigger area of the GUI's canvas system, a click event is fired; the preform (prefab) resource of the selected model is located, and the model is generated via the Instantiate method at world position (0.03, -0.03, 0.4), i.e. in the model editing area; the model is denoted M(v_l1, v_l2, ..., v_ln), hereafter model M, where v_l1, v_l2, ..., v_ln are the mesh-vertex coordinates of M in its own local coordinate system;
Step 203: control model rotation through the graphical user interface; the user slides the model-rotation slider with a fingertip, and model M is rotated about its own vertical axis through the transform.Rotate method; the slider's value range is defined as 1 to 360, giving a controllable rotation speed of 1/360 r/s to 1 r/s;
Step 204: control model enlargement and reduction through the graphical user interface; the user slides the model-scaling slider with a fingertip, and the enlargement and reduction of model M are controlled through transform.localScale; the slider's value range is defined as 1 to 2, so the model size can be varied between 1 and 2 times its original size;
Step 205: define two spherical colliders and bind them to the left and right fingertips LTop(x_tl, y_tl, z_tl) and RTop(x_tr, y_tr, z_tr), respectively;
Step 206: define a deformation script and attach it to model M; specifically, add a mesh collider to model M and listen for collision events; when a spherical collider bound to a fingertip touches the surface of model M, it generates a collision event with M's mesh collider; once the deformation script detects the event, the mesh-vertex coordinates of M in its local coordinate system, (v_l1, v_l2, ..., v_ln), are converted to coordinates in the world coordinate system, (v_w1, v_w2, ..., v_wn); the world coordinates of the vertices at the collision point are denoted (v_wi, v_w(i+1), ..., v_wj), 1 ≤ i ≤ j ≤ n, and the collision-point normal vectors are denoted (c_wi, c_w(i+1), ..., c_wj); following this collision-detection approach, the mesh-vertex positions of model M at the collision point are changed by the formula:

(v'_wi, v'_w(i+1), ..., v'_wj) = (v_wi, v_w(i+1), ..., v_wj) + (c_wi, c_w(i+1), ..., c_wj) × (d × f), 1 ≤ i ≤ j ≤ n

where d is the displacement direction of the mesh vertices of model M, f is the deformation intensity, and (v'_wi, v'_w(i+1), ..., v'_wj) are the world coordinates of the displaced mesh vertices of M;

the mesh-vertex coordinates of model M are then converted from the world coordinate system back to its local coordinate system, yielding (v'_l1, v'_l2, ..., v'_ln); from these coordinates, the vertex normals are recalculated with the mesh.RecalculateNormals method, giving the deformed model, denoted M'(v'_l1, v'_l2, ..., v'_ln), hereafter model M', which realizes the sculpting effect;
Step 207: clicking the re-sculpt button in the graphical user interface with a fingertip restores model M' to model M; repeating step 206 then sculpts the model anew;
Step 3: the model-combination process comprises the following substeps:
Step 301: repeat steps 201 and 202 to select a new model, denoted model S;
Step 302: identify the palms from the hand data captured by the Meta 2; through the position-tracking sensor on the Meta 2 augmented reality glasses, acquire in real time the coordinates of the left and right palms in the world coordinate system, denoted LPalm(x_pl, y_pl, z_pl) and RPalm(x_pr, y_pr, z_pr), respectively; from the information in LTop(x_tl, y_tl, z_tl), RTop(x_tr, y_tr, z_tr), LPalm(x_pl, y_pl, z_pl) and RPalm(x_pr, y_pr, z_pr), two hand gestures are defined, a grip (Grab) gesture and a release (Release) gesture; taking the left hand as an example, when

[grip condition: in the source these two inequalities on the fingertip and palm coordinates appear only as images and are not recoverable]

hold, the hand is in the grip posture; otherwise it is in the release posture; the right hand is handled in the same way;
Step 303: on the basis of the grip and release gestures, the user interacts with the models in the system main scene naturally with both hands; the GrabInteraction, TwoHandGrabRotateInteraction and TwoHandGrabScaleInteraction scripts provided with the Meta 2 augmented reality glasses are mounted on the models, so that the user can move, rotate, enlarge and reduce a model by interacting with it with one hand or both hands;
Step 304: move model M' and model S until they touch or overlap; clicking the model-combination button in the graphical user interface with a fingertip triggers detection of whether another model, namely model S, is inside the trigger area of model M'; if model S is present, it is set as a child object of model M'; in subsequent operations the position and rotation information of model S stay consistent with model M', and its size is enlarged and reduced together with model M', realizing the model-combination effect;
Step 4: support the model export operation after the handicraft design is finished; the user clicks the model-export button in the graphical user interface with a fingertip to trigger the export function; the models in the model editing area are exported as a 3D model file in OBJ format and a material-library file in MTL format; both files are saved in a local folder designated by the user, the OBJ file storing the 3D model information and the MTL file storing the material information of the model, so that a model designed in this system can be opened and edited in other 3D modelling software or printed with a 3D printer.
Application CN201810894635.9A, filed 2018-08-08 (priority date 2018-08-08): Meta 2-based natural interaction handicraft creative design system. Granted as CN109308741B (Active).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201810894635.9A (CN109308741B) | 2018-08-08 | 2018-08-08 | Meta 2-based natural interaction handicraft creative design system

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201810894635.9A (CN109308741B) | 2018-08-08 | 2018-08-08 | Meta 2-based natural interaction handicraft creative design system

Publications (2)

Publication Number | Publication Date
CN109308741A (en) | 2019-02-05
CN109308741B (en) | 2023-04-07

Family

ID=65225942

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201810894635.9A (CN109308741B, Active) | Meta 2-based natural interaction handicraft creative design system | 2018-08-08 | 2018-08-08

Country Status (1)

Country Link
CN (1) CN109308741B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732477A (en) * 2015-02-04 2015-06-24 Changchun University of Science and Technology Register tracking method based on electromagnetic position tracker and motion capture system
CN106406875A (en) * 2016-09-09 2017-02-15 South China University of Technology Virtual digital sculpture method based on natural gesture
CN108334198A (en) * 2018-02-09 2018-07-27 South China University of Technology Virtual sculpting method based on augmented reality

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8606657B2 (en) * 2009-01-21 2013-12-10 Edgenet, Inc. Augmented reality method and system for designing environments and buying/selling goods
US9916691B2 (en) * 2013-02-14 2018-03-13 Seiko Epson Corporation Head mounted display and control method for head mounted display
IL298018B2 (en) * 2013-03-11 2024-04-01 Magic Leap Inc System and method for augmented and virtual reality
SE537621C2 (en) * 2013-09-10 2015-08-11 Scania Cv Ab Detection of objects using a 3D camera and a radar
CN103955267B * 2013-11-13 2017-03-15 Shanghai University Two-hand human-computer interaction method in a see-through augmented reality system
CN106294918A (en) * 2015-06-10 2017-01-04 Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences A design method for a virtual transparent office system
CN107885327B * 2017-10-27 2020-11-13 Changchun University of Science and Technology Fingertip detection method based on Kinect depth information


Also Published As

Publication number Publication date
CN109308741A (en) 2019-02-05

Similar Documents

Publication Publication Date Title
Arora et al. Symbiosissketch: Combining 2d & 3d sketching for designing detailed 3d objects in situ
Weichel et al. MixFab: a mixed-reality environment for personal fabrication
Shaw et al. Two-handed polygonal surface design
Sachs et al. 3-Draw: A tool for designing 3D shapes
Wang et al. Real-time hand-tracking with a color glove
US7106334B2 (en) Animation creation program
Sheng et al. An interface for virtual 3D sculpting via physical proxy.
TWI827633B (en) System and method of pervasive 3d graphical user interface and corresponding readable medium
CN107992858A (en) A kind of real-time three-dimensional gesture method of estimation based on single RGB frame
Mendes et al. Mid-air modeling with Boolean operations in VR
Smith et al. Digital foam interaction techniques for 3D modeling
CN110543230A (en) Stage lighting element design method and system based on virtual reality
Nishino et al. 3d object modeling using spatial and pictographic gestures
Olsen et al. A Taxonomy of Modeling Techniques using Sketch-Based Interfaces.
CN109308741B (en) Meta 2-based natural interaction handicraft creative design system
CN113887497A (en) Three-dimensional sketch drawing method in virtual reality based on gesture drawing surface
Cho et al. 3D volume drawing on a potter's wheel
Eitsuka et al. Authoring animations of virtual objects in augmented reality-based 3d space
Stork et al. Sketching free-forms in semi-immersive virtual environments
Mohanty et al. Kinesthetically augmented mid-air sketching of multi-planar 3D curve-soups
Casti et al. CageLab: an Interactive Tool for Cage-Based Deformations.
Varga et al. Survey and investigation of hand motion processing technologies for compliance with shape conceptualization
Nan et al. vdesign: Toward image segmentation and composition in cave using finger interactions
Qin et al. Use of three-dimensional body motion to free-form surface design
Arora Creative visual expression in immersive 3D environments

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant