
CN115756153A - Virtual reality (VR) interaction system and method based on the metaverse - Google Patents


Info

Publication number
CN115756153A
Authority
CN
China
Prior art keywords
user, face, dimensional, acquiring, virtual reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211193112.4A
Other languages
Chinese (zh)
Inventor
郑敏波
曹威
徐玫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Guangcheng Aluminum Co ltd
Guangya Aluminium Co ltd
Original Assignee
Foshan Guangcheng Aluminum Co ltd
Guangya Aluminium Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Guangcheng Aluminum Co ltd, Guangya Aluminium Co ltd filed Critical Foshan Guangcheng Aluminum Co ltd
Priority to CN202211193112.4A
Publication of CN115756153A
Current legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention provides a virtual reality (VR) interaction system and method based on the metaverse. The system comprises: a scanning device for scanning a target physical space to obtain 3D scan data; a spatial digitization model for obtaining a three-dimensional space scene model according to the 3D scan data; a VR headset for performing virtualized display of the three-dimensional space scene model; and a mobile instruction terminal for acquiring the user's movement intention. The mobile instruction terminal obtains the user's movement intention from user input instructions and feeds it back to the VR headset, and the VR headset updates the virtually displayed three-dimensional space scene model according to the movement intention and presents it to the user. The invention brings the user a more vivid and realistic virtual reality experience and improves the sense of realism.

Description

Virtual reality (VR) interaction system and method based on the metaverse
Technical Field
The invention relates to the technical field of the metaverse, and in particular to a virtual reality (VR) interaction system and method based on the metaverse.
Background
The metaverse (Metaverse) is a virtual world constructed with digital technology that is mapped to and interacts with the real world, providing a digital living space with a new kind of social system. The metaverse is essentially a virtualization and digitization of the real world, requiring extensive changes to content production, economic systems, user experience, and physical-world content. Its development is gradual: it is ultimately shaped by the continuous fusion and evolution of many tools and platforms, supported by shared infrastructure, standards, and protocols. It provides immersive experiences based on augmented reality technology, generates a mirror image of the real world based on digital twin technology, builds an economic system based on blockchain technology, tightly fuses the virtual and real worlds across economic, social, and identity systems, and allows every user to produce content and edit the world.
Virtual reality (VR) in the metaverse presents a virtual environment in virtual time and space. VR technology gives the experiencer an immersive sense of presence; as metaverse VR develops, it can deliver more realistic game effects to people, improve their viewing experience, and so on.
When a conventional metaverse VR interaction system is used, however, the user can only passively receive virtual scene images and cannot obtain different virtual scene images according to his or her own intention and face pose, which to some extent reduces the user's experience and also reduces the sense of realism.
Disclosure of Invention
Based on this, in order to improve the user experience, the invention provides a virtual reality (VR) interaction system and method based on the metaverse. The specific technical scheme is as follows:
A metaverse-based virtual reality (VR) interaction system, comprising:
a scanning device for scanning a target physical space to obtain 3D scan data;
a spatial digitization model for obtaining a three-dimensional space scene model according to the 3D scan data;
a VR headset for performing virtualized display of the three-dimensional space scene model; and
a mobile instruction terminal for acquiring the user's movement intention;
wherein the mobile instruction terminal obtains the user's movement intention from a user input instruction and feeds it back to the VR headset, and the VR headset updates the virtually displayed three-dimensional space scene model according to the user's movement intention and presents it to the user.
By obtaining 3D scan data of a target physical space, obtaining the corresponding three-dimensional space scene model from the 3D scan data, virtually displaying the three-dimensional space scene model through the VR headset, and updating the model in real time according to the user's intention and face pose, the metaverse-based virtual reality (VR) interaction system brings the user a more vivid and realistic virtual reality experience and improves the sense of realism.
Further, the mobile instruction terminal comprises a plurality of input buttons for acquiring user input instructions, the input buttons respectively corresponding to different user input instructions.
Further, the metaverse-based virtual reality (VR) interaction system further comprises:
a plurality of image acquisition devices for acquiring images of the user's face and processing them to obtain key feature points, constructing a three-dimensional image of the user's face from the key feature points, and obtaining the user's face pose from the three-dimensional face image;
wherein the user's movement intention comprises the user input instruction and the user's face pose.
Further, the key feature points include the eye corners, mouth corners, nose, pupil centers, and eye edges.
Further, the image acquisition device is a high-definition camera.
A metaverse-based virtual reality (VR) interaction method, comprising the following steps:
S1, scanning a target physical space to obtain 3D scan data;
S2, obtaining a three-dimensional space scene model from the 3D scan data;
S3, performing virtualized display of the three-dimensional space scene model through a VR headset;
S4, acquiring the user's movement intention and face pose and feeding them back to the VR headset;
S5, the VR headset updating the virtually displayed three-dimensional space scene model according to the user's movement intention and face pose.
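Purely as an illustration of how steps S1 to S5 fit together, and not as part of the claimed method, the flow could be orchestrated as in the following Python sketch; every component name (scanner, digitizer, headset, terminal, cameras) is hypothetical:

```python
def run_vr_interaction(scanner, digitizer, headset, terminal, cameras):
    """Top-level loop for the metaverse VR interaction method (S1-S5).
    All collaborating components are hypothetical placeholders."""
    scan_data = scanner.scan_target_space()            # S1: 3D scan data
    scene = digitizer.build_scene_model(scan_data)     # S2: 3D space scene model
    headset.display(scene)                             # S3: virtualized display
    while headset.is_active():
        intention = terminal.read_input()              # S4: movement intention...
        pose = cameras.estimate_face_pose()            # ...and face pose
        headset.update_scene(scene, intention, pose)   # S5: update the display
```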
Further, the metaverse-based virtual reality (VR) interaction method further comprises the following steps:
acquiring a user input instruction;
acquiring images of the user's face and processing them to obtain key feature points;
constructing a three-dimensional image of the user's face from the key feature points and obtaining the user's face pose from the three-dimensional face image;
wherein the user's movement intention comprises the user input instruction and the user's face pose.
Further, the key feature points include the eye corners, mouth corners, nose, pupil centers, and eye edges.
A computer-readable storage medium storing a computer program which, when executed by a processor, implements the metaverse-based virtual reality (VR) interaction method.
Drawings
The invention will be further understood from the following description in conjunction with the accompanying drawings. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the embodiments. Like reference numerals designate corresponding parts throughout the different views.
Fig. 1 is a schematic overall flow chart of a metaverse-based virtual reality VR interaction method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the overall structure of a metaverse-based virtual reality VR interaction system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, the invention is described in further detail below with reference to its embodiments. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for illustrative purposes only and do not denote the only possible orientations.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terms "first" and "second" used herein do not denote any particular order or quantity, but rather are used to distinguish one element from another.
Embodiment one:
As shown in fig. 2, an embodiment of the metaverse-based virtual reality VR interaction system includes a scanning device configured to scan a target physical space to obtain 3D scan data, a spatial digitization model configured to obtain a three-dimensional space scene model from the 3D scan data, and a VR headset configured to perform virtualized display of the three-dimensional space scene model. The system further comprises a mobile instruction terminal for acquiring the user's movement intention; the mobile instruction terminal obtains the user's movement intention from a user input instruction and feeds it back to the VR headset, and the VR headset updates the virtually displayed three-dimensional space scene model according to the user's movement intention and presents it to the user.
Since scanning a target physical space to obtain 3D scan data, obtaining a three-dimensional scene model from the 3D scan data, and virtually displaying the three-dimensional scene model are conventional technical means in the art, they are not described further here.
Specifically, the movement instruction terminal includes a plurality of input buttons for acquiring user input instructions, each button corresponding to a different user input instruction, such as move forward, turn around, or pause.
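As a minimal sketch only (the patent does not prescribe an implementation), the button-to-instruction mapping could look like this in Python; the button identifiers and command names are assumptions:

```python
from enum import Enum, auto

class MoveCommand(Enum):
    """User input instructions issued by the mobile instruction terminal."""
    FORWARD = auto()
    TURN_AROUND = auto()
    PAUSE = auto()

# Hypothetical mapping from physical input buttons to instructions;
# each button corresponds to a different user input instruction.
BUTTON_MAP = {
    "button_1": MoveCommand.FORWARD,
    "button_2": MoveCommand.TURN_AROUND,
    "button_3": MoveCommand.PAUSE,
}

def on_button_press(button_id: str) -> MoveCommand:
    """Translate a button press into the user input instruction
    that is fed back to the VR headset."""
    return BUTTON_MAP[button_id]
```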
The user's movement intention comprises the user input instruction and the user's face pose.
The metaverse-based virtual reality VR interaction system further comprises a plurality of image acquisition devices for acquiring images of the user's face and processing them to obtain key feature points, constructing a three-dimensional image of the user's face from the key feature points, and obtaining the user's face pose from the three-dimensional face image.
A plurality of user face images at different angles are captured by the plurality of image acquisition devices, and key feature points of each face image are extracted. The key feature points include the eye corners, mouth corners, nose, pupil centers, and eye edges. The image acquisition devices include, but are not limited to, high-definition cameras installed around the user.
For example, when the user wears the VR headset to experience the metaverse virtual reality in real time inside a given space (a room, a shopping mall, or a virtual reality experience hall), the image acquisition devices are installed at different positions in that space and therefore have different positions relative to the user. After the key feature points of the user's face images are extracted, a three-dimensional face image is constructed from the key feature points.
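As an illustrative sketch under stated assumptions (dlib's standard 68-point landmark model for the corners and edges, and a Hough-circle fit for the pupil centers, which the patent derives from pupil edge information in the grayscale image; all parameter values are assumptions), per-image key-feature-point extraction could look like this:

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Standard 68-point landmark model; the file path is an assumption.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_key_feature_points(image: np.ndarray) -> dict:
    """Extract the key feature points named above: eye corners,
    mouth corners, nose, pupil centers, and eye edges."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    face = detector(gray)[0]          # assume exactly one visible face
    pts = predictor(gray, face)
    p = lambda i: (pts.part(i).x, pts.part(i).y)
    return {
        "eye_corners": [p(36), p(39), p(42), p(45)],
        "eye_edges": [p(i) for i in range(36, 48)],
        "mouth_corners": [p(48), p(54)],
        "nose": p(30),
        "pupil_centers": pupils_from_edges(gray, face),
    }

def pupils_from_edges(gray: np.ndarray, face) -> list:
    """Locate pupil centers from pupil edge information via a
    Hough-circle fit on the face region."""
    roi = gray[face.top():face.bottom(), face.left():face.right()]
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=roi.shape[1] // 4, param1=100,
                               param2=20, minRadius=3, maxRadius=20)
    if circles is None:
        return []
    return [(int(x) + face.left(), int(y) + face.top())
            for x, y, _r in circles[0][:2]]
```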
Specifically, the method for constructing the three-dimensional face image from the key feature points comprises the following steps:
In the first step, grayscale processing is performed on the user's face image, pupil edge information is extracted, and the pupil centers of the face image are determined from the pupil edge information.
In the second step, the eye corners, mouth corners, and eye edges of the face image are detected.
In the third step, the key feature points are matched to the corresponding feature points of a standard three-dimensional face model to establish an association between the standard model and the user's face image, and the coordinate positions of the feature points of the standard model are preliminarily adjusted.
In the fourth step, the standard three-dimensional face model is rotated successively according to the face pose angles of the multiple user face images, so that the face pose angle of the rotated standard model is the same as that of the corresponding user face image.
In the fifth step, a scaling factor and a position-translation factor of the standard model after each rotation relative to the corresponding user face image are calculated from the coordinate positions of the feature points of the standard model; the size and shape of the standard model's face are adjusted to match according to the scaling factor and the position-translation factor, and the standard model is snapped onto the corresponding user face images according to the position-translation factor to obtain the user's three-dimensional face mesh model.
In the sixth step, texture mapping is performed on the user's three-dimensional face mesh model based on a viewpoint-independent face texture image, so as to construct the three-dimensional image of the user's face.
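The patent gives no formulas for the scaling factor and position-translation factor in step five. As a non-authoritative sketch, they can be estimated as a least-squares similarity fit between the feature points of the already rotated (and projected) standard model and the landmarks detected in the corresponding user face image:

```python
import numpy as np

def similarity_fit(model_pts: np.ndarray, image_pts: np.ndarray):
    """Estimate the scaling factor and position-translation factor that
    best align the rotated standard-model feature points with the
    detected landmarks, in the least-squares sense.

    model_pts, image_pts: (N, 2) arrays of corresponding points.
    """
    mc, ic = model_pts.mean(axis=0), image_pts.mean(axis=0)
    m0, i0 = model_pts - mc, image_pts - ic
    scale = np.sum(m0 * i0) / np.sum(m0 * m0)   # optimal isotropic scale
    translation = ic - scale * mc
    return scale, translation

def snap_model(model_pts: np.ndarray, scale: float, translation: np.ndarray):
    """Snap the scaled standard model onto the user face image."""
    return scale * model_pts + translation
```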
After the three-dimensional image of the user's face is constructed, the user's face pose is obtained from it; the user's movement intention, formed from the face pose together with the user input instruction, is fed back to the VR headset, and the VR headset updates the virtually displayed three-dimensional space scene model according to the movement intention and presents it to the user.
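The patent does not state how the face pose is computed from the three-dimensional face image. One common approach, offered here only as an assumption, is a perspective-n-point fit between the model's 3D feature points and their 2D positions in one camera image, using OpenCV:

```python
import cv2
import numpy as np

def estimate_face_pose(model_3d: np.ndarray, image_2d: np.ndarray,
                       camera_matrix: np.ndarray):
    """Recover the rotation and translation of the user's face from
    2D-3D feature-point correspondences.

    model_3d: (N, 3) feature points of the user's 3D face model.
    image_2d: (N, 2) matching landmarks in one camera image.
    camera_matrix: 3x3 intrinsics of that camera (assumed known).
    """
    dist_coeffs = np.zeros(4)                    # assume an undistorted lens
    ok, rvec, tvec = cv2.solvePnP(model_3d.astype(np.float64),
                                  image_2d.astype(np.float64),
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("face pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)            # 3x3 rotation matrix
    return rotation, tvec
```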
In this way, the metaverse-based virtual reality VR interaction system obtains 3D scan data of the target physical space, obtains the corresponding three-dimensional space scene model from the 3D scan data, virtually displays the model through the VR headset, and updates it in real time according to the user's intention and face pose, bringing the user a more vivid and realistic virtual reality experience and improving the sense of realism. Using only the mobile instruction terminal, the user can browse and tour the scenery in the virtual reality according to his or her own intention.
Correspondingly, as shown in fig. 1, this embodiment further provides a metaverse-based virtual reality VR interaction method, comprising the following steps:
S1, scanning a target physical space to obtain 3D scan data;
S2, obtaining a three-dimensional space scene model from the 3D scan data;
S3, performing virtualized display of the three-dimensional space scene model through the VR headset;
S4, acquiring the user's movement intention and face pose and feeding them back to the VR headset;
S5, the VR headset updating the virtually displayed three-dimensional space scene model according to the user's movement intention and face pose.
By obtaining 3D scan data of a target physical space, obtaining the corresponding three-dimensional space scene model from the 3D scan data, virtually displaying the model through the VR headset, and updating it in real time according to the user's intention and face pose, the metaverse-based virtual reality (VR) interaction method brings the user a more vivid and realistic virtual reality experience.
Embodiment two:
As shown in fig. 2, an embodiment of the metaverse-based virtual reality VR interaction system includes a scanning device configured to scan a target physical space to obtain 3D scan data, a spatial digitization model configured to obtain a three-dimensional space scene model from the 3D scan data, and a VR headset configured to perform virtualized display of the three-dimensional space scene model. The system further comprises a mobile instruction terminal for acquiring the user's movement intention; the mobile instruction terminal obtains the user's movement intention from a user input instruction and feeds it back to the VR headset, and the VR headset updates the virtually displayed three-dimensional space scene model according to the user's movement intention and presents it to the user.
Specifically, the movement instruction terminal includes a plurality of input buttons for acquiring user input instructions, each button corresponding to a different user input instruction, such as a move-forward instruction, a turn-around instruction, or a pause instruction.
The metaverse-based virtual reality VR interaction system further comprises a plurality of image acquisition devices for acquiring images of the user's face and processing them; a three-dimensional image of the user's face is constructed from the processed face images, and the user's face pose is obtained from the three-dimensional face image.
A plurality of user face images at different angles are captured by the plurality of image acquisition devices, and key feature points of each face image are extracted. The key feature points include the eye corners, mouth corners, nose, pupil centers, and eye edges. The image acquisition devices include, but are not limited to, high-definition cameras installed around the user.
For example, when the user wears the VR headset to experience the metaverse virtual reality inside a given application space (a room, a shopping mall, or a virtual reality experience hall), the image acquisition devices are installed at different positions in that space and therefore have different positions relative to the user. After the key feature points of the user's face images are extracted, a three-dimensional face image is constructed from the key feature points.
Specifically, the method for constructing the three-dimensional face image from the key feature points comprises the following steps:
In the first step, grayscale processing is performed on the user's face image, pupil edge information is extracted, and the pupil centers of the face image are determined from the pupil edge information.
In the second step, the eye corners, mouth corners, and eye edges of the face image are detected.
In the third step, the key feature points are matched to the corresponding feature points of a standard three-dimensional face model to establish an association between the standard model and the user's face image, and the coordinate positions of the feature points of the standard model are preliminarily adjusted.
In the fourth step, the standard three-dimensional face model is rotated successively according to the face pose angles of the multiple user face images, so that the face pose angle of the rotated standard model is the same as that of the corresponding user face image.
In the fifth step, a scaling factor and a position-translation factor of the standard model after each rotation relative to the corresponding user face image are calculated from the coordinate positions of the feature points of the standard model; the size and shape of the standard model's face are adjusted to match according to the scaling factor and the position-translation factor, and the standard model is snapped onto the corresponding user face images according to the position-translation factor to obtain the user's three-dimensional face mesh model.
In the sixth step, texture mapping is performed on the user's three-dimensional face mesh model based on a viewpoint-independent face texture image, so as to construct the three-dimensional image of the user's face.
After the three-dimensional image of the user's face is constructed, the user's face pose is obtained from it; the user's movement intention is formed from the face pose together with the user input instruction, and the VR headset updates the virtually displayed three-dimensional space scene model according to the movement intention and presents it to the user.
In this way, the three-dimensional space scene model can be updated in real time according to the user's movement intention and face pose, bringing the user a more vivid and realistic virtual reality experience.
As a preferred technical solution, in this embodiment the mobile instruction terminal is further configured to acquire an interaction instruction and feed it back to the VR headset, and the scanning device is configured to perform a stereoscopic scan of each user entering the application space, so as to obtain the user's initial three-dimensional image and assign each user an initial coordinate.
The user wears the VR headset, enters the metaverse virtual reality, and interacts in VR; the user's real-time spatial coordinates are updated from the user's movement intention and initial coordinates, and at the same time the user's real-time three-dimensional image is updated from the face pose contained in the movement intention.
Once the user's real-time spatial coordinates and real-time three-dimensional image are updated, the user's real-time three-dimensional data are obtained.
The VR headset is further configured to overlay the real-time three-dimensional data of other users onto the three-dimensional space scene model according to the interaction instruction, updating and virtually displaying the scene in real time.
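A minimal sketch of the per-user real-time three-dimensional data and the overlay step, with all names and the scene API purely hypothetical:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class UserState:
    """Per-user real-time three-dimensional data."""
    user_id: str
    coords: np.ndarray    # real-time spatial coordinates (x, y, z)
    mesh: object          # real-time three-dimensional image of the user

def update_user(state: UserState, move_delta: np.ndarray, new_mesh) -> None:
    """Update the real-time coordinates from the movement intention and
    the real-time 3D image from the face pose, as described above."""
    state.coords = state.coords + move_delta
    state.mesh = new_mesh

def overlay_other_users(scene, users: dict, me: str):
    """Overlay every other user's real-time 3D data onto the scene model
    displayed by the local VR headset (scene.place is hypothetical)."""
    for uid, st in users.items():
        if uid != me:
            scene.place(st.mesh, st.coords)
    return scene
```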
The VR headset includes at least one audio module comprising an earpiece unit and a microphone unit for ordinary voice calls. The VR headsets worn by different users are communicatively connected. After a user inputs an interaction instruction, the sound loudness attenuation is calculated from the distance between the target user and the sound source.
Specifically, after the target user is determined, the distance between the target user and the sound source is obtained, the sound loudness attenuation is calculated from that distance, the loudness of the sound reaching the target user is calculated from the attenuation, and the sound is played through the audio module in the VR headset.
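The patent does not fix an attenuation model. Under the common free-field point-source assumption (about 6 dB of attenuation per doubling of distance), the loudness reaching the target user could be computed as follows:

```python
import math

def attenuated_loudness_db(source_db: float, distance_m: float,
                           ref_distance_m: float = 1.0) -> float:
    """Loudness (in dB) of the sound reaching the target user, attenuated
    by 20*log10(d/d_ref) with distance from the sound source."""
    d = max(distance_m, ref_distance_m)   # clamp inside the reference radius
    return source_db - 20.0 * math.log10(d / ref_distance_m)

def user_distance(source_xyz, target_xyz) -> float:
    """Euclidean distance between the sound source and the target user,
    taken from their real-time spatial coordinates."""
    return math.dist(source_xyz, target_xyz)
```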
In this way, by obtaining the distance between the target user and the sound source, calculating the sound loudness attenuation, and adjusting the playback loudness of the audio module accordingly, the virtual reality behaves more like reality and the sense of realism is improved.
Correspondingly, this embodiment also provides a metaverse-based virtual reality VR interaction method, comprising the following steps:
S1, scanning a target physical space to obtain 3D scan data;
S2, obtaining a three-dimensional space scene model from the 3D scan data;
S3, performing virtualized display of the three-dimensional space scene model through the VR headset;
S4, acquiring the user's movement intention and face pose and feeding them back to the VR headset;
S5, the VR headset updating the virtually displayed three-dimensional space scene model according to the user's movement intention and face pose.
By obtaining 3D scan data of a target physical space, obtaining the corresponding three-dimensional space scene model from the 3D scan data, virtually displaying the model through the VR headset, and updating it in real time according to the user's intention and face pose, the metaverse-based virtual reality (VR) interaction method brings the user a more realistic virtual reality experience.
The metaverse-based virtual reality (VR) interaction method in this embodiment further comprises the following steps:
acquiring the interaction instructions of all users in the application space;
acquiring the real-time three-dimensional data of all users in the application space, the real-time three-dimensional data comprising real-time three-dimensional images and real-time spatial coordinates;
according to each user's interaction instruction, overlaying the real-time three-dimensional data of the other users onto the three-dimensional space scene model, and updating and virtually displaying the scene model in real time through the VR headset.
Specifically, when a user enters the application space, the scanning device performs a stereoscopic scan of the user to obtain the user's initial three-dimensional image and assigns each user an initial coordinate. The user wears the VR headset, enters the metaverse virtual reality, and interacts in VR; the user's real-time spatial coordinates are updated from the user's movement intention and initial coordinates, and the user's real-time three-dimensional image is updated from the face pose contained in the movement intention.
Once the user's real-time spatial coordinates and real-time three-dimensional image are updated, the user's real-time three-dimensional data are obtained.
To improve the sense of realism and the user experience of the virtual reality interaction system, the user input instructions further include the user's movement pace. The user can adjust his or her pace through the mobile instruction terminal so as to adjust the movement speed in the virtual reality VR interaction system. Preferably, the interaction system further comprises a VR treadmill on which the user stands to experience the metaverse virtual reality; the system obtains the user's movement pace through the VR treadmill and feeds it back to the VR headset.
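A simple kinematic mapping from the pace measured by the VR treadmill to movement in the virtual scene, sketched here as an assumption since the patent leaves the mapping unspecified:

```python
import numpy as np

def virtual_step(position: np.ndarray, heading: np.ndarray,
                 treadmill_speed_mps: float, dt: float) -> np.ndarray:
    """Advance the user's position in the three-dimensional space scene
    model using the real pace measured by the VR treadmill.

    position: current scene coordinates
    heading:  unit direction vector from the face pose / turn instruction
    """
    return position + heading * treadmill_speed_mps * dt
```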
In this way, by obtaining the user's movement pace through the VR treadmill, the user's real pace can be obtained, improving the sense of realism and the user experience of the virtual reality interaction system.
According to the method, the three-dimensional data of other users are overlaid onto the three-dimensional space scene model according to each user's interaction instruction, and the scene model is updated and virtually displayed in real time through the VR headset, which improves the sense of realism of the virtual reality VR interaction system.
This embodiment also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the metaverse-based virtual reality VR interaction method.
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described, but any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present invention, and their description is specific and detailed, but they are not to be understood as limiting the scope of the invention. It should be noted that those skilled in the art can make various changes and modifications without departing from the spirit of the invention, and such changes and modifications all fall within the scope of the invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (9)

1. A metaverse-based virtual reality (VR) interaction system, comprising:
a scanning device for scanning a target physical space to obtain 3D scan data;
a spatial digitization model for obtaining a three-dimensional space scene model according to the 3D scan data;
a VR headset for performing virtualized display of the three-dimensional space scene model; and
a mobile instruction terminal for acquiring the user's movement intention;
wherein the mobile instruction terminal obtains the user's movement intention from a user input instruction and feeds it back to the VR headset, and the VR headset updates the virtually displayed three-dimensional space scene model according to the user's movement intention and presents it to the user.
2. The metaverse-based virtual reality (VR) interaction system of claim 1, wherein the mobile instruction terminal comprises a plurality of input buttons for acquiring user input instructions, the input buttons respectively corresponding to different user input instructions.
3. The metaverse-based virtual reality (VR) interaction system of claim 2, further comprising:
a plurality of image acquisition devices for acquiring images of the user's face and processing them to obtain key feature points, constructing a three-dimensional image of the user's face from the key feature points, and obtaining the user's face pose from the three-dimensional face image;
wherein the user's movement intention comprises the user input instruction and the user's face pose.
4. The metaverse-based virtual reality (VR) interaction system of claim 3, wherein the key feature points include the eye corners, mouth corners, nose, pupil centers, and eye edges.
5. The metaverse-based virtual reality (VR) interaction system of claim 4, wherein the image acquisition device is a high-definition camera.
6. A metaverse-based virtual reality (VR) interaction method, characterized by comprising the following steps:
S1, scanning a target physical space to obtain 3D scan data;
S2, obtaining a three-dimensional space scene model from the 3D scan data;
S3, performing virtualized display of the three-dimensional space scene model through a VR headset;
S4, acquiring the user's movement intention and face pose and feeding them back to the VR headset;
S5, the VR headset updating the virtually displayed three-dimensional space scene model according to the user's movement intention and face pose.
7. The metaverse-based virtual reality (VR) interaction method of claim 6, further comprising the following steps:
acquiring a user input instruction;
acquiring images of the user's face and processing them to obtain key feature points;
constructing a three-dimensional image of the user's face from the key feature points and obtaining the user's face pose from the three-dimensional face image;
wherein the user's movement intention comprises the user input instruction and the user's face pose.
8. The method of claim 7, wherein the key feature points include the eye corners, mouth corners, nose, pupil centers, and eye edges.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the metaverse-based virtual reality VR interaction method as claimed in any one of claims 6 to 8.
CN202211193112.4A 2022-09-28 2022-09-28 Virtual Reality (VR) interaction system and method based on meta universe Pending CN115756153A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211193112.4A CN115756153A (en) 2022-09-28 2022-09-28 Virtual Reality (VR) interaction system and method based on meta universe

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211193112.4A CN115756153A (en) 2022-09-28 2022-09-28 Virtual Reality (VR) interaction system and method based on meta universe

Publications (1)

Publication Number Publication Date
CN115756153A true CN115756153A (en) 2023-03-07

Family

ID=85350541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211193112.4A Pending CN115756153A (en) 2022-09-28 2022-09-28 Virtual Reality (VR) interaction system and method based on meta universe

Country Status (1)

Country Link
CN (1) CN115756153A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024197726A1 (en) * 2023-03-30 2024-10-03 西门子股份公司 Data processing method and apparatus



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination