CN111161422A - Model display method for enhancing virtual scene implementation - Google Patents
Model display method for enhancing virtual scene implementation
- Publication number
- CN111161422A (application number CN201911285290.8A)
- Authority
- CN
- China
- Prior art keywords
- real
- virtual scene
- camera
- image
- virtual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/66—Methods for processing data by generating or executing the game program for rendering three dimensional images
- A63F2300/6661—Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
The embodiment of the invention discloses a model display method for realizing an enhanced virtual scene, comprising the following steps: a camera captures image information of real space; a game engine reads the camera's real image and superimposes it on the virtual scene to simulate it; the enhanced-virtual-scene implementation framework obtains the image data shot by the camera in real time and modifies the position of the real image in the virtual scene in real time according to the camera's position in real space. The scheme analyzes the relative positions of the virtual and real scenes from device tracking data obtained by visual-inertial odometry, aligns the coordinate systems, performs fusion calculation of the virtual scene, and achieves fully real-time dynamic simulation with a high degree of visualization.
Description
Technical Field
Embodiments of the invention relate to the technical field of 3D simulation, and in particular to a model display method for realizing an enhanced virtual scene.
Background
ARKit is the AR development platform launched by Apple at WWDC 2017. With this toolset and an iPhone or iPad, developers can create augmented reality applications. ARKit also supports two devices sharing the same virtual object, making the AR experience more engaging. On June 5, 2018, Apple unveiled iOS 12, which added several AR-related functions such as an AR measuring tool and multi-user AR interaction: measurements can be taken directly in an AR scene with a phone, and an AR LEGO game was demonstrated live in which, beyond the physical bricks, players can explore stories in the LEGO world and even see LEGO figures animated. AR functions can also be applied in many other scenarios, such as news applications, where AR display of pictures can be realized in web pages.
However, the conventional ARKit-based simulation display method can only realize static three-dimensional simulation, not dynamic three-dimensional simulation, which limits many applications of the ARKit development platform.
Disclosure of Invention
Therefore, embodiments of the invention provide a model display method for realizing an enhanced virtual scene. The method analyzes the relative positions of the virtual and real scenes from device tracking data obtained by visual-inertial odometry, aligns the coordinate systems, performs fusion calculation of the virtual scene, and supports fully real-time dynamic simulation, thereby solving the prior-art problem that ARKit simulation display can only realize static, not dynamic, three-dimensional simulation, limiting the applications of the ARKit development platform.
In order to achieve the above object, embodiments of the present invention provide a model display method for realizing an enhanced virtual scene, comprising the following steps:
Step 100, a camera captures image information of real space;
Step 200, a game engine reads in real time the real image shot by the camera, the enhanced-virtual-scene implementation framework acquires and processes the data of the real image, and the real image is superimposed on the virtual scene to simulate it;
Step 300, the position of the real image in the virtual scene is modified in real time according to the orientation of the camera in real space, completing the conversion from the real image to a 3D virtual image.
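The three steps above form a per-frame loop: capture a real frame, overlay it on the virtual scene, and update its placement from the tracked camera pose. The following minimal sketch is illustrative only; all names in it (Pose, compose_frame) are hypothetical stand-ins for the AR camera class, game engine, and framework described here, not a real API.

```python
# Minimal sketch of the three-step loop (steps 100-300). All names here
# (Pose, compose_frame) are hypothetical stand-ins, not a real API.
from dataclasses import dataclass

@dataclass
class Pose:
    # Six-axis pose: translation along x, y, z plus rotation about x, y, z.
    x: float
    y: float
    z: float
    rx: float
    ry: float
    rz: float

def compose_frame(real_frame: str, camera_pose: Pose) -> dict:
    """Steps 200/300: overlay the captured real frame on the virtual scene,
    anchored at a position derived from the tracked camera pose."""
    return {
        "background": real_frame,  # step 200: the real image layer
        "anchor": (camera_pose.x, camera_pose.y, camera_pose.z),
        "orientation": (camera_pose.rx, camera_pose.ry, camera_pose.rz),
    }

# Step 100 would supply real_frame and camera_pose each frame; one iteration:
frame = compose_frame("frame_0001", Pose(0.0, 0.0, 0.0, 0.0, 0.0, 0.0))
print(frame["anchor"])  # (0.0, 0.0, 0.0)
```

In the real pipeline this loop runs once per camera frame, with the pose supplied by the tracking system rather than constructed by hand.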
As a preferred embodiment of the present invention, in step 100, the image shot by the camera is captured in the game engine through the AR camera management class provided by the installation package of the enhanced-virtual-scene implementation framework.
As a preferred embodiment of the present invention, in step 200, the real image shot by the camera is read in the game engine through the AR video class provided by the installation package of the enhanced-virtual-scene implementation framework, and the real image is then superimposed on the 3D virtual scene to obtain a 3D virtual image of the real image.
As a preferred aspect of the present invention, in step 300, the method of mapping the orientation of the camera-captured image in real space to a position in the virtual scene specifically comprises:
Step 301, creating, in real space, a real three-dimensional coordinate system tied to the camera's moving position;
Step 302, tracking the camera's six-axis pose in real time within the real three-dimensional coordinate system, and automatically acquiring the camera's corresponding direction and position;
Step 303, creating a three-dimensional virtual coordinate system for the virtual scene, tracking the camera data with the visual-inertial odometry of the enhanced-virtual-scene implementation framework, and analyzing the relative position of the real image in the virtual scene;
Step 304, aligning the coordinates of the real image with the three-dimensional virtual coordinate system of the virtual scene, and performing fusion calculation of the real image in the virtual scene.
As a preferred aspect of the present invention, in step 304, the manner of aligning the coordinates of the real image with the three-dimensional virtual coordinate system of the virtual scene is specifically:
selecting, in the real image, a plurality of feature points that represent the image's contour information;
tracking the position change of the feature points in the real-space three-dimensional coordinate system using the visual-inertial odometry of the enhanced-virtual-scene implementation framework;
and changing the scale and viewing angle of the feature points to form virtual multi-directional viewing angles, and displaying the virtual image from these viewing angles in the three-dimensional virtual coordinate system.
As a preferred solution of the present invention, in step 302, the user's position is recalculated before each camera frame is refreshed, and the camera system matches points in the real world to pixels on the camera sensor frame by frame to track the user's pose.
As a preferable aspect of the present invention, the six-axis movement of the real image in the real three-dimensional coordinate system comprises translation along, and rotation about, three mutually perpendicular coordinate axes x, y and z, while the real image is displayed only at a fixed position in the three-dimensional virtual coordinate system; that is, movement or rotation of the camera during capture does not affect the position of the real image in the three-dimensional virtual coordinate system.
As a preferable scheme of the present invention, after step 300 completes the conversion from the real image to the 3D virtual image, the method further comprises a step of adding special effects to the 3D virtual image, which specifically comprises:
classifying the picture resource materials to be added;
packing them as required into a Unity3D atlas with a texture packing tool;
and importing the sequence frames of the atlas into the Unity3D animation component to produce a special-effect animation.
As a preferable scheme of the invention, the material management module in the game engine replaces the material picture in the special-effect animation to distinguish line colors, and an algorithm controls the UV coordinates of the material to realize a flowing-line effect.
As a preferable scheme of the invention, the UI management system in the game engine places the special-effect animation at virtual-world coordinates, and an algorithm controls the playing order of the UI special-effect animations.
The embodiment of the invention has the following advantages:
in this embodiment, the AR camera captures the camera picture, the game engine builds the 3D virtual scene, and the virtual object is then displayed in the real scene. The relative positions of the virtual and real scenes are analyzed from device tracking data obtained by visual-inertial odometry; the coordinate systems are aligned and fusion calculation of the virtual scene is performed, achieving fully real-time dynamic simulation with a high degree of visualization.
Drawings
In order to illustrate the embodiments of the present invention or the prior-art technical solutions more clearly, the drawings used in their description are briefly introduced below. The drawings in the following description are merely exemplary; those of ordinary skill in the art can derive other drawings from them without inventive effort.
FIG. 1 is a block diagram of a 3D virtual model display method according to an embodiment of the present invention;
fig. 2 is a block diagram illustrating an association relationship between an enhanced virtual scene implementation framework and a game engine according to an embodiment of the present invention.
Detailed Description
The present invention is described below by way of particular embodiments; other advantages and effects of the invention will be readily apparent to those skilled in the art from the disclosure herein. The described embodiments are merely some, not all, of the embodiments of the invention and are not intended to limit it. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.
As shown in fig. 1 and 2, the invention provides a model display method for realizing an enhanced virtual scene. In this embodiment, the AR camera captures the camera picture, the game engine builds the 3D virtual scene, and the virtual object is then displayed in the real scene. Device tracking data obtained by visual-inertial odometry is used to analyze the relative positions of the virtual and real scenes, align the coordinate systems, and perform fusion calculation of the virtual scene.
The method specifically comprises the following steps:
step 100, a camera captures image information in real space.
In step 100, the picture taken by the camera is captured in the game engine through the AR Camera Manager (AR camera management class) provided by the ARKit SDK.
ARKit is a framework added in iOS 11, released by Apple, that realizes AR functionality in the simplest and fastest way.
The enhanced virtual scene implementation framework provides two AR techniques: augmented reality implemented based on 3D scenes, and augmented reality implemented based on 2D scenes.
When creating a virtual 3D model with ARKit, the real-world image is first captured by the AR camera, which is only responsible for capturing the image and does not take part in processing the data. The camera is one node in the 3D scene; every 3D scene has a camera, which determines the view of the objects.
Step 200, the game engine reads in real time the real image shot by the camera, the enhanced-virtual-scene implementation framework acquires and processes the data of the real image, and the real image is superimposed on the virtual scene to simulate it.
The real image shot by the camera is read in the game engine through the Unity AR Video (AR video class) provided by the ARKit SDK, and the real image is then superimposed on the 3D virtual scene to obtain a 3D virtual image of the real image.
Step 300, the position of the real image in the virtual scene is modified in real time according to the orientation of the camera in real space, completing the conversion from the real image to the 3D virtual image.
The method of mapping the orientation of the captured image in real space to a position in the virtual scene specifically comprises:
step 301, a real three-dimensional coordinate system related to the moving position of the camera is created in real space.
And 302, tracking the six-axis posture of the camera in real time in a three-dimensional space of a real three-dimensional coordinate system, and automatically acquiring the corresponding direction and position of the camera.
Step 303, creating a three-dimensional virtual coordinate system related to the virtual scene, tracking data of the camera by using a visual inertial ranging method for enhancing the virtual scene to realize a frame, and analyzing the relative position of the real image in the virtual scene.
In the above three steps, the correspondence between real space and virtual space is established through ARKit's visual-inertial odometry, and the camera's spatial position (its six-axis pose in the real three-dimensional coordinate system) is tracked in real time in software; that is, the user's position is recalculated before each camera frame is refreshed. The camera system matches points in the real world to pixels on the camera sensor frame by frame to track the user's pose.
The six-axis movement of the real image in the real three-dimensional coordinate system comprises translation along, and rotation about, three mutually perpendicular coordinate axes x, y and z, while the real image is displayed only at a fixed position in the three-dimensional virtual coordinate system; that is, movement or rotation of the camera during capture does not affect the position of the real image in the three-dimensional virtual coordinate system.
That is to say, each frame of image acquired by the camera is displayed at a fixed position in the three-dimensional virtual coordinate system. As the camera moves, the real image translates in the real-space three-dimensional coordinate system, while the 3D virtual image changes every frame at the fixed position in the virtual coordinate system; the image at that fixed position is therefore continuously updated, and the user can view the 3D virtual image of every scene in real space.
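A worked illustration of the fixed-anchor behaviour just described: the virtual image keeps constant world coordinates, and only its coordinates in the moving camera's frame change. The sketch below is a deliberate simplification (yaw-only rotation, hypothetical helper names), not ARKit's actual tracking math:

```python
import math

def rotate_z(p, angle):
    """Rotate point p = (x, y, z) about the z axis by `angle` radians."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

def world_to_camera(p_world, cam_pos, cam_yaw):
    """Express a fixed world-space point in the moving camera's frame:
    translate by -cam_pos, then undo the camera's yaw rotation."""
    rel = tuple(pw - pc for pw, pc in zip(p_world, cam_pos))
    return rotate_z(rel, -cam_yaw)

# The anchor never moves in world space; only its camera-frame coordinates
# change as the device translates, which is why the virtual image appears
# pinned to one spot in the scene.
anchor = (1.0, 0.0, 0.0)
print(world_to_camera(anchor, (0.0, 0.0, 0.0), 0.0))  # (1.0, 0.0, 0.0)
print(world_to_camera(anchor, (1.0, 0.0, 0.0), 0.0))  # (0.0, 0.0, 0.0)
```

A full 6-DoF version would apply the inverse of the camera's complete rotation (all three axes) rather than yaw alone.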
And 304, aligning the coordinates of the real image with the coordinates of the three-dimensional virtual coordinate system of the virtual scene, and performing fusion calculation of the real image in the virtual scene.
The coordinates of the real image are aligned one by one with the three-dimensional virtual coordinate system of the virtual scene specifically as follows:
(1) selecting, in the real image, a plurality of feature points that represent the image's contour information;
(2) tracking the position change of the feature points in the real-space three-dimensional coordinate system using the visual-inertial odometry of the enhanced-virtual-scene implementation framework;
(3) changing the scale and viewing angle of the feature points to form virtual multi-directional viewing angles, and displaying the virtual image from these viewing angles in the three-dimensional virtual coordinate system.
This ensures that no information in the 3D virtual image is lost, and that the 3D virtual image is formed entirely from the image features captured in real space.
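The feature-point alignment in steps (1) to (3) amounts to fitting a transform that maps real-space coordinates onto virtual-scene coordinates. As a simplified sketch, assume rotation is already aligned so that only a uniform scale and a translation remain (a deliberate simplification; the full problem also estimates rotation, e.g. with the Umeyama method). A least-squares fit from matched feature points then looks like:

```python
def fit_scale_translation(real_pts, virt_pts):
    """Estimate a uniform scale s and translation t such that
    virt ~= s * real + t, from matched 3D feature points (least squares).
    Rotation is assumed already aligned -- a simplifying assumption."""
    n = len(real_pts)
    cr = [sum(p[i] for p in real_pts) / n for i in range(3)]  # real centroid
    cv = [sum(p[i] for p in virt_pts) / n for i in range(3)]  # virtual centroid
    num = den = 0.0
    for r, v in zip(real_pts, virt_pts):
        for i in range(3):
            num += (r[i] - cr[i]) * (v[i] - cv[i])
            den += (r[i] - cr[i]) ** 2
    s = num / den
    t = [cv[i] - s * cr[i] for i in range(3)]
    return s, t

# Synthetic check: virtual points are the real points scaled by 2
# and shifted by (1, 1, 0), so the fit should recover exactly that.
real = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0), (2.0, 2.0, 0.0)]
virt = [(1.0, 1.0, 0.0), (5.0, 1.0, 0.0), (1.0, 5.0, 0.0), (5.0, 5.0, 0.0)]
s, t = fit_scale_translation(real, virt)
print(s, t)  # 2.0 [1.0, 1.0, 0.0]
```

Once s and t are known, every tracked real-image coordinate can be mapped into the virtual coordinate system before the fusion calculation of step 304.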
After the real image is simulated in three dimensions, the method further comprises adding special effects to the 3D virtual image, specifically:
first, classifying the picture resource materials to be added;
then packing them as required into a Unity3D atlas with a texture packing tool;
and finally, importing the sequence frames of the atlas into the Unity3D animation component to produce the special-effect animation.
Meanwhile, the shader (material management class) in the game engine replaces the model's material picture to distinguish line colors, an algorithm controls the UV coordinates of the material to achieve the flowing-line effect, the UI management system in the game engine places the special-effect animation at virtual-world coordinates, and the algorithm controls the playing order of the UI special-effect animations.
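The two effect mechanisms just described, a looping sequence-frame animation from an atlas and a flowing-line effect driven by shifting a material's UV offset, reduce to small per-frame computations. The sketch below is illustrative only; the function names and constants are assumptions, not engine API:

```python
def uv_offset(t, speed=0.25):
    """UV offset for a scrolling (flowing-line) material effect: the
    texture coordinate is shifted by speed * t, wrapped back into [0, 1)."""
    return (speed * t) % 1.0

def atlas_frame(t, frame_count, fps=12):
    """Index of the atlas sequence frame to show at time t, for a
    looping animation played at a fixed frame rate."""
    return int(t * fps) % frame_count

print(uv_offset(5.0))       # 0.25
print(atlas_frame(1.0, 8))  # 4
```

In Unity the equivalent per-frame step would set the material's texture offset and swap the sprite shown by the animation component; the arithmetic is the same.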
Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to one skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.
Claims (10)
1. A model display method for enhancing virtual scene implementation is characterized by comprising the following steps:
step 100, a camera captures image information of a real space;
Step 200, a game engine reads in real time the real image shot by the camera, the augmented virtual scene implementation framework acquires and processes the data of the real image, and the real image is superimposed on the virtual scene to simulate it;
Step 300, the position of the real image in the virtual scene is modified in real time according to the orientation of the camera in real space, completing the conversion from the real image to a 3D virtual image.
2. The model display method for augmented virtual scene implementation of claim 1, wherein in step 100, the image shot by the camera is captured in the game engine through the AR camera management class provided by the augmented virtual scene implementation framework installation package.
3. The model display method for augmented virtual scene implementation according to claim 1, wherein in step 200, the real image shot by the camera is read in the game engine through the AR video class provided by the augmented virtual scene implementation framework installation package, and the real image and the 3D virtual scene are then superimposed to obtain a 3D virtual image of the real image.
4. The method according to claim 1, wherein in step 300, the method for mapping the orientation of the image captured by the camera in the real space to the position in the virtual scene specifically comprises:
Step 301, creating, in real space, a real three-dimensional coordinate system tied to the camera's moving position;
Step 302, tracking the camera's six-axis pose in real time within the real three-dimensional coordinate system, and automatically acquiring the camera's corresponding direction and position;
Step 303, creating a three-dimensional virtual coordinate system for the virtual scene, tracking the camera data with the visual-inertial odometry of the augmented virtual scene implementation framework, and analyzing the relative position of the real image in the virtual scene;
Step 304, aligning the coordinates of the real image with the three-dimensional virtual coordinate system of the virtual scene, and performing fusion calculation of the real image in the virtual scene.
5. The method of claim 4, wherein in step 304, the coordinates of the real image are aligned with the three-dimensional virtual coordinate system of the virtual scene specifically by:
selecting, in the real image, a plurality of feature points that represent the image's contour information;
tracking the position change of the feature points in the real-space three-dimensional coordinate system using the visual-inertial odometry of the augmented virtual scene implementation framework;
and changing the scale and viewing angle of the feature points to form virtual multi-directional viewing angles, and displaying the virtual image from these viewing angles in the three-dimensional virtual coordinate system.
6. The method of claim 4, wherein in step 302, before each frame of image captured by the camera is refreshed, the user's position is recalculated, and a point in the real world is matched with a frame of pixels on the camera sensor by the camera system to track the user's pose.
7. The model display method for augmented virtual scene implementation of claim 6, wherein the six-axis movement of the real image in the real three-dimensional coordinate system comprises translation along, and rotation about, three mutually perpendicular coordinate axes x, y and z, and the real image is displayed only at a fixed position in the three-dimensional virtual coordinate system; that is, movement or rotation of the camera during capture does not affect the position of the real image in the three-dimensional virtual coordinate system.
8. The model display method for enhancing the implementation of a virtual scene according to claim 1, wherein after the conversion from the real image to the 3D virtual image is completed in step 300, the method further comprises a step of adding a special effect to the 3D virtual image, and the step of adding a special effect specifically comprises:
classifying the picture resource materials to be added;
packing them as required into a Unity3D atlas with a texture packing tool;
and importing the sequence frames of the atlas into the Unity3D animation component to produce a special-effect animation.
9. The model display method for enhancing virtual scene implementation of claim 8, wherein the material management module in the game engine replaces the material picture in the special-effect animation to distinguish line colors, and an algorithm controls the UV coordinates of the material to realize the flowing-line effect.
10. The method of claim 9, wherein the UI management system in the game engine places the special-effect animation at virtual-world coordinates, and an algorithm controls the playing order of the UI special-effect animations.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911285290.8A CN111161422A (en) | 2019-12-13 | 2019-12-13 | Model display method for enhancing virtual scene implementation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911285290.8A CN111161422A (en) | 2019-12-13 | 2019-12-13 | Model display method for enhancing virtual scene implementation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111161422A true CN111161422A (en) | 2020-05-15 |
Family
ID=70556921
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911285290.8A Pending CN111161422A (en) | 2019-12-13 | 2019-12-13 | Model display method for enhancing virtual scene implementation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111161422A (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111638793A (en) * | 2020-06-04 | 2020-09-08 | 浙江商汤科技开发有限公司 | Aircraft display method and device, electronic equipment and storage medium |
CN111651057A (en) * | 2020-06-11 | 2020-09-11 | 浙江商汤科技开发有限公司 | Data display method and device, electronic equipment and storage medium |
CN111744202A (en) * | 2020-06-29 | 2020-10-09 | 完美世界(重庆)互动科技有限公司 | Method and device for loading virtual game, storage medium and electronic device |
CN111833462A (en) * | 2020-07-14 | 2020-10-27 | 深圳市瑞立视多媒体科技有限公司 | Cutting method, device and equipment based on illusion engine and storage medium |
CN111915737A (en) * | 2020-08-11 | 2020-11-10 | 厦门长辉实业有限公司 | Human-object interaction system based on augmented reality |
CN112348933A (en) * | 2020-11-18 | 2021-02-09 | 北京达佳互联信息技术有限公司 | Animation generation method and device, electronic equipment and storage medium |
CN112634420A (en) * | 2020-12-22 | 2021-04-09 | 北京达佳互联信息技术有限公司 | Image special effect generation method and device, electronic equipment and storage medium |
CN112619140A (en) * | 2020-12-18 | 2021-04-09 | 网易(杭州)网络有限公司 | Method and device for determining position in game and method and device for adjusting path |
CN112884888A (en) * | 2021-03-23 | 2021-06-01 | 中德(珠海)人工智能研究院有限公司 | Exhibition display method, system, equipment and medium based on mixed reality |
CN112945222A (en) * | 2021-01-27 | 2021-06-11 | 杭州钱航船舶修造有限公司 | Ship driving-assistant glasses image fusion method and system based on field direction |
CN113347373A (en) * | 2021-06-16 | 2021-09-03 | 潍坊幻视软件科技有限公司 | Image processing method for making special-effect video in real time through AR space positioning |
CN113542609A (en) * | 2021-07-20 | 2021-10-22 | 大连东软信息学院 | Graduation memory system based on augmented reality technology and panoramic video technology and use method |
CN113643443A (en) * | 2021-10-13 | 2021-11-12 | 潍坊幻视软件科技有限公司 | Positioning system for AR/MR technology |
CN114760458A (en) * | 2022-04-28 | 2022-07-15 | 中南大学 | Method for synchronizing tracks of virtual camera and real camera of high-reality augmented reality studio |
CN116320364A (en) * | 2023-05-25 | 2023-06-23 | 四川中绳矩阵技术发展有限公司 | Virtual reality shooting method and display method based on multi-layer display |
CN116896608A (en) * | 2023-09-11 | 2023-10-17 | 山东省地震局 | Virtual earthquake scene playing system based on mobile equipment propagation |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106097435A (en) * | 2016-06-07 | 2016-11-09 | 北京圣威特科技有限公司 | A kind of augmented reality camera system and method |
CN107331220A (en) * | 2017-09-01 | 2017-11-07 | 国网辽宁省电力有限公司锦州供电公司 | Transformer O&M simulation training system and method based on augmented reality |
CN109471521A (en) * | 2018-09-05 | 2019-03-15 | 华东计算技术研究所(中国电子科技集团公司第三十二研究所) | Virtual and real shielding interaction method and system in AR environment |
- 2019-12-13: Application filed in China, CN201911285290.8A (patent CN111161422A), status Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106097435A (en) * | 2016-06-07 | 2016-11-09 | 北京圣威特科技有限公司 | A kind of augmented reality camera system and method |
CN107331220A (en) * | 2017-09-01 | 2017-11-07 | 国网辽宁省电力有限公司锦州供电公司 | Transformer O&M simulation training system and method based on augmented reality |
CN109471521A (en) * | 2018-09-05 | 2019-03-15 | 华东计算技术研究所(中国电子科技集团公司第三十二研究所) | Virtual and real shielding interaction method and system in AR environment |
Non-Patent Citations (1)
Title |
---|
Ma Huimin et al. (eds.): "Smart New Retail: Retail Transformation in the Era of Data Intelligence" (智能新零售 数据智能时代的零售业变革), Huazhong University of Science and Technology Press, page 133 * |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111638793B (en) * | 2020-06-04 | 2023-09-01 | 浙江商汤科技开发有限公司 | Aircraft display method and device, electronic equipment and storage medium |
CN111638793A (en) * | 2020-06-04 | 2020-09-08 | 浙江商汤科技开发有限公司 | Aircraft display method and device, electronic equipment and storage medium |
CN111651057A (en) * | 2020-06-11 | 2020-09-11 | 浙江商汤科技开发有限公司 | Data display method and device, electronic equipment and storage medium |
CN111744202A (en) * | 2020-06-29 | 2020-10-09 | 完美世界(重庆)互动科技有限公司 | Method and device for loading virtual game, storage medium and electronic device |
CN111833462A (en) * | 2020-07-14 | 2020-10-27 | 深圳市瑞立视多媒体科技有限公司 | Cutting method, device and equipment based on Unreal Engine and storage medium |
CN111833462B (en) * | 2020-07-14 | 2024-05-17 | 深圳市瑞立视多媒体科技有限公司 | Cutting method, device, equipment and storage medium based on Unreal Engine |
CN111915737A (en) * | 2020-08-11 | 2020-11-10 | 厦门长辉实业有限公司 | Human-object interaction system based on augmented reality |
CN111915737B (en) * | 2020-08-11 | 2024-03-01 | 厦门长辉实业有限公司 | Human-object interaction system based on augmented reality |
CN112348933A (en) * | 2020-11-18 | 2021-02-09 | 北京达佳互联信息技术有限公司 | Animation generation method and device, electronic equipment and storage medium |
CN112348933B (en) * | 2020-11-18 | 2023-10-31 | 北京达佳互联信息技术有限公司 | Animation generation method, device, electronic equipment and storage medium |
CN112619140A (en) * | 2020-12-18 | 2021-04-09 | 网易(杭州)网络有限公司 | Method and device for determining position in game and method and device for adjusting path |
CN112619140B (en) * | 2020-12-18 | 2024-04-26 | 网易(杭州)网络有限公司 | Method and device for determining position in game and method and device for adjusting path |
CN112634420A (en) * | 2020-12-22 | 2021-04-09 | 北京达佳互联信息技术有限公司 | Image special effect generation method and device, electronic equipment and storage medium |
CN112634420B (en) * | 2020-12-22 | 2024-04-30 | 北京达佳互联信息技术有限公司 | Image special effect generation method and device, electronic equipment and storage medium |
CN112945222A (en) * | 2021-01-27 | 2021-06-11 | 杭州钱航船舶修造有限公司 | Ship driving-assistant glasses image fusion method and system based on field direction |
CN112884888B (en) * | 2021-03-23 | 2024-06-04 | 中德(珠海)人工智能研究院有限公司 | Exhibition display method, system, equipment and medium based on mixed reality |
CN112884888A (en) * | 2021-03-23 | 2021-06-01 | 中德(珠海)人工智能研究院有限公司 | Exhibition display method, system, equipment and medium based on mixed reality |
CN113347373B (en) * | 2021-06-16 | 2022-06-03 | 潍坊幻视软件科技有限公司 | Image processing method for making special-effect video in real time through AR space positioning |
CN113347373A (en) * | 2021-06-16 | 2021-09-03 | 潍坊幻视软件科技有限公司 | Image processing method for making special-effect video in real time through AR space positioning |
CN113542609B (en) * | 2021-07-20 | 2024-05-07 | 大连东软信息学院 | Graduation memory system based on augmented reality technology and panoramic video technology and use method |
CN113542609A (en) * | 2021-07-20 | 2021-10-22 | 大连东软信息学院 | Graduation memory system based on augmented reality technology and panoramic video technology and use method |
CN113643443A (en) * | 2021-10-13 | 2021-11-12 | 潍坊幻视软件科技有限公司 | Positioning system for AR/MR technology |
CN114760458A (en) * | 2022-04-28 | 2022-07-15 | 中南大学 | Method for synchronizing tracks of virtual camera and real camera of high-reality augmented reality studio |
CN114760458B (en) * | 2022-04-28 | 2023-02-24 | 中南大学 | Method for synchronizing tracks of virtual camera and real camera of high-reality augmented reality studio |
CN116320364B (en) * | 2023-05-25 | 2023-08-01 | 四川中绳矩阵技术发展有限公司 | Virtual reality shooting method and display method based on multi-layer display |
CN116320364A (en) * | 2023-05-25 | 2023-06-23 | 四川中绳矩阵技术发展有限公司 | Virtual reality shooting method and display method based on multi-layer display |
CN116896608B (en) * | 2023-09-11 | 2023-12-12 | 山东省地震局 | Virtual seismic scene presentation system |
CN116896608A (en) * | 2023-09-11 | 2023-10-17 | 山东省地震局 | Virtual earthquake scene playing system based on mobile equipment propagation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111161422A (en) | Model display method for enhancing virtual scene implementation | |
US11610331B2 (en) | Method and apparatus for generating data for estimating three-dimensional (3D) pose of object included in input image, and prediction model for estimating 3D pose of object | |
US9665984B2 (en) | 2D image-based 3D glasses virtual try-on system | |
US9654734B1 (en) | Virtual conference room | |
Tian et al. | Handling occlusions in augmented reality based on 3D reconstruction method | |
CN108389247A (en) | For generating the true device and method with binding threedimensional model animation | |
CN112950751B (en) | Gesture action display method and device, storage medium and system | |
CN110568923A (en) | unity 3D-based virtual reality interaction method, device, equipment and storage medium | |
US9183654B2 (en) | Live editing and integrated control of image-based lighting of 3D models | |
CN107274491A (en) | A kind of spatial manipulation Virtual Realization method of three-dimensional scenic | |
CN108133454B (en) | Space geometric model image switching method, device and system and interaction equipment | |
CN113936121B (en) | AR label setting method and remote collaboration system | |
CN108986232A (en) | A method of it is shown in VR and AR environment picture is presented in equipment | |
CN106980378A (en) | Virtual display methods and system | |
CN114092670A (en) | Virtual reality display method, equipment and storage medium | |
CN116958344A (en) | Animation generation method and device for virtual image, computer equipment and storage medium | |
Schönauer et al. | Wide area motion tracking using consumer hardware | |
Adithya et al. | Augmented reality approach for paper map visualization | |
Valentini | Natural interface in augmented reality interactive simulations: This paper demonstrates that the use of a depth sensing camera that helps generate a three-dimensional scene and track user's motion could enhance the realism of the interactions between virtual and physical objects | |
Abdelnaby et al. | Augmented reality maintenance training with intel depth camera | |
Eskandari et al. | Diminished reality in architectural and environmental design: Literature review of techniques, applications, and challenges | |
Carozza et al. | An immersive hybrid reality system for construction training | |
CN114067046A (en) | Method and system for reconstructing and displaying hand three-dimensional model by single picture | |
CN111862338A (en) | Display method and device for simulating glasses wearing image | |
Akinjala et al. | Animating human movement & gestures on an agent using Microsoft kinect |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 2020-05-15 |