WO2019041614A1 - Head-mounted immersive virtual reality display device and immersive virtual reality display method - Google Patents
Head-mounted immersive virtual reality display device and immersive virtual reality display method
- Publication number
- WO2019041614A1 WO2019041614A1 PCT/CN2017/114192 CN2017114192W WO2019041614A1 WO 2019041614 A1 WO2019041614 A1 WO 2019041614A1 CN 2017114192 W CN2017114192 W CN 2017114192W WO 2019041614 A1 WO2019041614 A1 WO 2019041614A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- head
- virtual reality
- optical system
- mounted display
- Prior art date
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0138—Head-up displays characterised by optical features comprising image capture systems, e.g. camera
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
Definitions
- The present invention relates to the field of three-dimensional display technology, and in particular to an immersive virtual reality head-mounted display device and an immersive virtual reality display method.
- Virtual reality technology is a computer simulation technique for creating and experiencing virtual worlds. It is mostly used for entertainment, such as immersive video games, but it is also of great importance in other fields, such as medicine, the military, and aerospace.
- The current mainstream virtual reality devices on the market, such as the HTC Vive and Oculus Rift, do not reach the optimal field of view for a head-mounted display (120°), so user immersion cannot be maximized.
- In devices such as the HTC Vive, the equipment used for head tracking and attitude detection is too complex and requires a large space to operate.
- Most mobile virtual reality devices rely on a gyroscope for head tracking, and the tracking accuracy cannot be guaranteed.
- The object of the present invention is to provide an immersive virtual reality head-mounted display device and an immersive virtual reality display method that achieve a large field of view while guaranteeing head-tracking accuracy and requiring little space to operate.
- An immersive virtual reality head-mounted display device includes a head tracking and attitude detection system, a head-mounted display optical system, an image segmentation system, an encoding and decoding system, and a processing system. The head tracking and attitude detection system captures images of the environment surrounding the user's eyes; the encoding and decoding system decodes the image signal from the tracking and attitude detection system and transmits it to the processing system.
- After processing, the result is encoded by the encoding and decoding system and transmitted to the image segmentation system, which divides the received image information into multiple display images for display through the head-mounted display optical system.
- The head tracking and attitude detection system captures images of the surrounding environment and obtains head position and posture information.
- The processing system may be an external computer, which superimposes virtual video information on the surrounding-environment image information to combine the real and virtual worlds; at the same time, the computer adjusts the virtual video information according to changes in head position and posture so that it blends better with the real surrounding scene.
- The head tracking and attitude detection system includes a live-view camera unit placed at the positions of the user's left and right eyes for actual shooting of the surrounding scene, a panoramic camera unit for panoramic shooting of the surrounding scene, and a position and attitude sensor.
- The environment panorama obtained by the panoramic camera is used as a base, onto which the surrounding scenes captured by the two conventional cameras at the left- and right-eye positions are superimposed. The panoramic camera unit has a larger shooting range but a lower resolution, whereas the conventional cameras have a smaller shooting range and a higher resolution. The effect of the overlay is that, within a large lower-resolution image, part of the area (the region covered by the conventional cameras, i.e., the normal viewing area of the eyes) has higher resolution than its surroundings. Including the surrounding environment, even at modest resolution, deepens the user's sense of presence.
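The overlay described above can be illustrated with a minimal NumPy sketch. The frame sizes, the fixed paste position, and the function name are illustrative assumptions rather than details taken from the text; in practice the paste region would come from camera calibration.

```python
import numpy as np

def overlay_high_res(panorama: np.ndarray, eye_frame: np.ndarray,
                     center_row: int, center_col: int) -> np.ndarray:
    """Paste a high-resolution camera frame onto a low-resolution panorama base.

    panorama  : H x W x 3 base image, already resampled to the output resolution
    eye_frame : h x w x 3 higher-resolution frame from one eye-position camera
    center_*  : where the camera's optical axis lands on the panorama (assumed known)
    """
    out = panorama.copy()
    h, w = eye_frame.shape[:2]
    top, left = max(center_row - h // 2, 0), max(center_col - w // 2, 0)
    bottom, right = min(top + h, out.shape[0]), min(left + w, out.shape[1])
    # Replace the corresponding region of the base with the sharper frame.
    out[top:bottom, left:right] = eye_frame[: bottom - top, : right - left]
    return out

base = np.zeros((1200, 4000, 3), dtype=np.uint8)      # low-resolution panorama, upsampled
eye = np.full((1200, 1000, 3), 255, dtype=np.uint8)   # sharper forward view from one camera
fused = overlay_high_res(base, eye, center_row=600, center_col=2000)
print(fused.shape)                                    # (1200, 4000, 3)
```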
- The above video overlay process is implemented by an FPGA in the image segmentation system of the head-mounted display device.
- The FPGA combines the video signals from the conventional cameras and the panoramic camera unit into a single video signal and encodes it; the signal is sent over a cable to the encoding and decoding system for decoding, and the decoded video is input to the processing system through a USB 3.0 port for computation, so as to obtain the surrounding real scene and perform head tracking and attitude detection.
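The text does not specify how the two streams are merged into one signal; side-by-side frame packing is one common approach, sketched below (NumPy, synchronized frames assumed).

```python
import numpy as np

def pack_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Pack two synchronized frames into one wider frame for single-link transport."""
    if left.shape[0] != right.shape[0] or left.shape[2:] != right.shape[2:]:
        raise ValueError("frames must share height and channel count")
    return np.concatenate([left, right], axis=1)

def unpack_side_by_side(packed: np.ndarray, left_width: int):
    """Inverse operation on the decoding side."""
    return packed[:, :left_width], packed[:, left_width:]
```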
- the head-mounted display optical system includes two sets of monocular optical systems arranged symmetrically in a left-right direction;
- The monocular optical system includes a lens group disposed in front of the human eye and displays placed behind the lens group;
- the lens group includes at least one set of Fresnel lenses, each set formed by splicing two resin plates engraved with a Fresnel-lens sawtooth structure;
- there are two displays, each parallel to its corresponding resin plate;
- the angle β between the line from the apex of the splice of the two resin plates to the center of the eyeball and the optical axis perpendicular to the eyeball is greater than 0°.
- The Fresnel lens group forms a magnified virtual image of the image shown on the display.
- Each set of Fresnel lenses is spliced from two resin plates engraved with a Fresnel-lens sawtooth structure; the spliced Fresnel lens group is used in place of an ordinary lens so that the left and right fields of view can be joined more conveniently.
- If an ordinary lens group were used, its generally circular shape would prevent seamless joining when the left and right imaging systems are spliced.
- Because the two resin plates are rectangular, no gap remains after the two rectangles are spliced.
- In a monocular system, if the eye looks exactly at the splice of the two resin plates, the splice has no imaging function and the image shown on the display is not visible to the eye.
- The complete binocular system contains the above two monocular systems. Without this angle requirement, when the eyes observe infinity both would fall exactly on the Fresnel-lens splices of the two systems; neither eye could obtain the image from the display, and this position would become a blind spot. With the angle set, because of binocular convergence, when one eye looks at the splice of its Fresnel lens the other eye necessarily does not look at its own splice, so no blind spot exists.
- A more specific solution is that the two resin plates are spliced into an L shape; the lens group includes two sets of Fresnel lenses arranged in parallel; and the magnified virtual images formed through the visual optical system by the displays located at the left front and right front of the human eye have an overlap region δ in space.
- Within the overlap region δ, the image brightness of each display gradually dims from the overlap toward its edge.
- This gradual dimming from the overlap toward the edge ensures consistent brightness of the final composite image.
- For each L-shaped Fresnel lens, each arm together with the optical lens behind it constitutes a visual optical system with the following parameters:
- the horizontal and vertical fields of view are greater than 80°, the inner half-angle is 45°, and the overlap region is 90°;
- the visual optical systems of the two arms are joined into a monocular optical system with a field of view greater than 120°.
- In this way, the two arms of the L-shaped Fresnel lens are each an optical system with a field of view above 80°, and after splicing they become an optical system with a field of view above 120°.
- The head-mounted display optical system has a monocular field of view greater than 120°, a binocular field of view greater than 150°, and an overlap greater than 90°.
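As a consistency check (the text states the figures but not the relation), the binocular field follows from composing two overlapping monocular fields; with the nominal values quoted:

```latex
\theta_{\mathrm{binocular}} \;=\; \theta_{\mathrm{left}} + \theta_{\mathrm{right}} - \theta_{\mathrm{overlap}},
\qquad 120^\circ + 120^\circ - 90^\circ \;=\; 150^\circ .
```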
- The immersive virtual reality display method provided by the present invention comprises the following steps: 1) obtaining a real-scene image of the surrounding environment; 2) superimposing the real-scene image and a virtual video image to obtain the video signal actually to be played; 3) dividing the actually played video signal and displaying it on multiple displays; 4) analyzing the real-scene image to obtain position and posture information; 5) adjusting the virtual video image according to the position and posture information so that it fuses with the real-scene image.
- Step 1) comprises: using a panoramic camera to capture a panorama of the surrounding environment as a base, and superimposing on this base the surrounding-environment images captured by a conventional camera.
- Head tracking and posture detection in the present invention are realized by combining a panoramic annular imaging system with a nine-axis motion sensor such as a gyroscope, so the accuracy of head tracking and attitude detection can be improved without increasing the space occupied by the device.
- The head-mounted optical system of the present invention uses multi-display field-of-view splicing, so the field of view of the virtual display device can be increased to reach the optimal viewing angle of a head-mounted display, improving user immersion.
- The present invention obtains the surrounding real scene by combining a panoramic camera system with a real-scene camera system, so the largest possible surrounding real scene can be obtained without reducing resolution, strengthening the user's sense of presence.
- FIG. 1 is a schematic structural view of an embodiment of the present invention
- FIG. 2 is a schematic structural view of an immersive head mounted display optical system according to an embodiment of the present invention.
- FIG. 3 is a detailed structural view of one of the arm visual optical systems of the immersive head mounted display optical system according to the embodiment of the present invention.
- the immersive virtual reality head-mounted display device includes a head tracking and attitude detecting system 1, a head-mounted display optical system 2, an image segmentation system 3, an encoding and decoding system 4, and a processing system 5.
- The encoding and decoding system 4 first decodes the signal from the head tracking and attitude detection system 1 and passes it to the processing system 5. After processing by the processing system 5, the result is encoded by the encoding and decoding system 4 into a signal that the image segmentation system 3 can recognize and is transmitted to the image segmentation system 3, which restores the encoded image information drawn by the processing system 5, divides it into multiple display images, and transmits them to the head-mounted display optical system 2 for display.
- the processing system 5 includes an external computer.
- The head tracking and attitude detection system 1 includes a live-view camera unit, a panoramic camera unit, and a position and attitude sensor.
- The live-view camera unit includes two conventional cameras placed at the positions of the left and right eyes for actual shooting of the surrounding scene, providing real-scene images for the head-mounted display device's fusion of real and virtual video.
- The panoramic camera unit includes a panoramic imaging component for panoramic shooting of the surrounding scene, providing images used to determine the position and posture of the head-mounted display device.
- the position and attitude sensors can be gyroscopes.
- The head-mounted display optical system 2 includes two sets of monocular optical systems placed symmetrically left and right: monocular optical system 05 is placed in front of the left eye 07, and monocular optical system 06 is placed in front of the right eye 08.
- Each monocular optical system includes a lens group placed in front of the eye and, behind the lens group, two displays placed at the left front and right front of the eye. At the left front and right front of the left eye 07 are OLED display 01 and OLED display 02, respectively; at the left front and right front of the right eye 08 are OLED display 03 and OLED display 04, respectively. Each OLED screen has a resolution of 1000x1200, and the two OLEDs together synthesize a large field of view with a combined resolution of 2000x1200.
- the lens group includes at least one L-type Fresnel lens, and the first lens near the human eye employs an L-type Fresnel lens.
- The lens group contains one L-shaped Fresnel lens and one aspherical lens. The L-shaped Fresnel lens near the eye is formed by two resin plates joined at their ends; at least one of the two surfaces of each resin plate is engraved with a Fresnel-lens sawtooth structure, and the angle β between the line from the apex of the joint of the two resin plates to the center of the eyeball and the optical axis perpendicular to the eyeball is greater than 0°.
- This prevents both eyes from simultaneously looking at the splices of the two resin plates of their L-shaped Fresnel lenses when observing infinity, which would create a blind spot.
- Each arm of the L-type Fresnel lens and the optical lens behind it form a complete visual optical system.
- The specific structure can be as shown in FIG. 3: it contains one Fresnel lens near the eye and two aspherical plastic lenses. The left side is the position of the eye and the right side is the position of the display.
- The display is an OLED screen with a pixel size of about 50 µm, corresponding to 10 line pairs/mm.
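The two display figures are mutually consistent: one line pair spans two pixels, so at a 50 µm pixel pitch

```latex
\frac{1\ \text{line pair}}{2 \times 50\ \mu\mathrm{m}} \;=\; \frac{1}{0.1\ \mathrm{mm}} \;=\; 10\ \mathrm{lp/mm}.
```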
- The OLED screens at the left front and right front of the same eye each form a magnified virtual image through the L-shaped Fresnel lens, and the two virtual images have an overlap region δ in space.
- Within the overlap region δ, the image brightness of each display gradually dims from the overlap toward its edge, so that after the images in the overlap region are superimposed, the total brightness matches the brightness of each display's non-overlapping region.
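A minimal digital analogue of this cross-fade is sketched below. The linear ramp is an assumption (the text only requires that the two contributions sum to the non-overlap brightness), and the blending here is done in software on two adjacent images, whereas in the device the superposition happens optically between the two virtual images.

```python
import numpy as np

def crossfade_overlap(img_a: np.ndarray, img_b: np.ndarray, overlap_px: int) -> np.ndarray:
    """Blend two horizontally adjacent images whose last/first `overlap_px` columns overlap.

    Within the overlap, img_a's weight ramps 1 -> 0 and img_b's ramps 0 -> 1, so the
    summed brightness matches the non-overlapping regions.
    """
    h, wa, c = img_a.shape
    wb = img_b.shape[1]
    ramp = np.linspace(1.0, 0.0, overlap_px)[None, :, None]          # weights for img_a
    blended = (img_a[:, -overlap_px:].astype(float) * ramp
               + img_b[:, :overlap_px].astype(float) * (1.0 - ramp))
    out = np.empty((h, wa + wb - overlap_px, c), dtype=img_a.dtype)
    out[:, :wa - overlap_px] = img_a[:, :wa - overlap_px]
    out[:, wa - overlap_px:wa] = blended.astype(img_a.dtype)
    out[:, wa:] = img_b[:, overlap_px:]
    return out
```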
- Each of the L-type Fresnel lenses described above, each of which is followed by an optical lens, constitutes a complete visual optical system, and the parameters of the optical system are as follows:
- the horizontal and vertical fields of view are greater than 80°, the medial half angle is 45°, and the overlap area is guaranteed to be 90°; the visual optical system of the two arms is combined into a monocular optical system with a field of view greater than 120°.
- In this way, the two arms of the L-shaped Fresnel lens are each an optical system with a field of view above 80°, and after splicing they become an optical system with a field of view above 120°.
- The head tracking and attitude detection system 1 captures the surrounding real scene, including video from the panoramic camera system and from the conventional cameras placed at the left- and right-eye positions; an FPGA in the encoding and decoding system 4 combines these into a single video signal and encodes it.
- The encoded signal is sent over a cable to the encoding and decoding system on the processing-system-5 side for decoding, and the decoded video is input to the processing system 5 through a USB 3.0 port for computation, so as to obtain the surrounding real scene and perform head tracking and attitude detection.
- The processing system 5 superimposes the virtual video signal to be displayed on the obtained real video signal, encodes it via the encoding and decoding system 4 into a format that the image segmentation system 3 can recognize, and transmits it over a cable to the image segmentation system 3.
- The image segmentation system 3 restores the encoded image information drawn by the processing system 5 and divides it into multiple display images, which are fed to the drivers of the OLED image sources in the immersive head-mounted display optical system 2.
- The human eye can then see, through the head-mounted display optical system 2, a virtual image superimposed on the surrounding real environment.
- the image signal is drawn by a computer graphics card at a resolution of 2000x1200, and is split into two 1000x1200 resolution images by the segmentation system 3.
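A minimal sketch of that split (NumPy; the actual image segmentation system is hardware, so this only illustrates the data layout):

```python
import numpy as np

def split_for_two_displays(frame: np.ndarray):
    """Split one 2000x1200 rendered frame into two 1000x1200 halves, one per OLED panel.

    Arrays are indexed (height, width, channels), so 2000x1200 (width x height)
    corresponds to shape (1200, 2000, 3).
    """
    w = frame.shape[1]
    return frame[:, : w // 2], frame[:, w // 2:]

rendered = np.zeros((1200, 2000, 3), dtype=np.uint8)   # as drawn by the graphics card
left_half, right_half = split_for_two_displays(rendered)
print(left_half.shape, right_half.shape)               # (1200, 1000, 3) twice
```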
- Aberration correction is achieved by a combination of a Fresnel lens and an aspherical lens.
- A large-field-of-view immersive display is achieved through image fusion. The final result is a monocular field of view greater than 120°, a binocular field of view greater than 150°, and an overlap greater than 90°.
- the immersive virtual reality display method includes the following steps:
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
Abstract
A head-mounted immersive virtual reality display device and an immersive virtual reality display method. The head-mounted display device comprises a head tracking and orientation detection system (1), a head-mounted optical display system (2), an image segmentation system (3), an encoding and decoding system (4), and a processing system (5). The head tracking and orientation detection system (1) is used to capture an image of a peripheral environment of a human eye. The encoding and decoding system (4) decodes an image signal of the head tracking and orientation detection system (1) before transmitting the decoded image signal to the processing system (5). Once the same has been processed by the processing system (5), the encoding and decoding system (4) performs encoding, and image information is transmitted to the image segmentation system (3). The image segmentation system (3) divides the received image information into a plurality of display images, and then the head-mounted optical display system (2) performs display. The head tracking and orientation detection system (1) captures an image of a peripheral environment and acquires head position and orientation information, and superimposes virtual video information over the image information of the peripheral environment, thereby combining the real and virtual world.
Description
The present invention relates to the field of three-dimensional display technology, and in particular to an immersive virtual reality head-mounted display device and an immersive virtual reality display method.
Virtual reality technology is a computer simulation technique for creating and experiencing virtual worlds. It is mostly used for entertainment, such as immersive video games, but it is also of great importance in other fields, such as medicine, the military, and aerospace.
The current mainstream virtual reality devices on the market, such as the HTC Vive and Oculus Rift, do not reach the optimal field of view for a head-mounted display (120°), so user immersion cannot be maximized.
Moreover, in devices such as the HTC Vive, the equipment used for head tracking and attitude detection is too complex and requires a large space to operate. Most mobile virtual reality devices rely on a gyroscope for head tracking, and the tracking accuracy cannot be guaranteed.
Summary of the Invention
The object of the present invention is to provide an immersive virtual reality head-mounted display device and an immersive virtual reality display method that achieve a large field of view while guaranteeing head-tracking accuracy and requiring little space to operate.
To achieve the above object, the immersive virtual reality head-mounted display device provided by the present invention includes a head tracking and attitude detection system, a head-mounted display optical system, an image segmentation system, an encoding and decoding system, and a processing system. The head tracking and attitude detection system captures images of the environment surrounding the user's eyes; the encoding and decoding system decodes the image signal from the tracking and attitude detection system and transmits it to the processing system; after processing, the result is encoded by the encoding and decoding system and transmitted to the image segmentation system, which divides the received image information into multiple display images for display through the head-mounted display optical system.
In the above technical solution, the head tracking and attitude detection system captures images of the surrounding environment and obtains head position and posture information. The processing system may be an external computer, which superimposes virtual video information on the surrounding-environment image information to combine the real and virtual worlds; at the same time, the computer adjusts the virtual video information according to changes in head position and posture so that it blends better with the real surrounding scene.
A specific solution is that the head tracking and attitude detection system includes a live-view camera unit placed at the positions of the user's left and right eyes for actual shooting of the surrounding scene, a panoramic camera unit for panoramic shooting of the surrounding scene, and a position and attitude sensor.
The environment panorama obtained by the panoramic camera is used as a base, onto which the surrounding scenes captured by the two conventional cameras at the left- and right-eye positions are superimposed. The panoramic camera unit has a larger shooting range but a lower resolution, whereas the conventional cameras have a smaller shooting range and a higher resolution. The effect of the overlay is that, within a large lower-resolution image, part of the area (the region covered by the conventional cameras, i.e., the normal viewing area of the eyes) has higher resolution than its surroundings. Including the surrounding environment, even at modest resolution, deepens the user's sense of presence.
The above video overlay process is implemented by an FPGA in the image segmentation system of the head-mounted display device, which combines the video signals from the conventional cameras and the panoramic camera unit into a single video signal and encodes it; the signal is sent over a cable to the encoding and decoding system for decoding, and the decoded video is input to the processing system through a USB 3.0 port for computation, so as to obtain the surrounding real scene and perform head tracking and attitude detection.
Another specific solution is that the head-mounted display optical system includes two sets of monocular optical systems arranged symmetrically left and right. Each monocular optical system includes a lens group disposed in front of the human eye and displays placed behind the lens group. The lens group includes at least one set of Fresnel lenses, each set formed by splicing two resin plates engraved with a Fresnel-lens sawtooth structure. There are two displays, each parallel to its corresponding resin plate. The angle β between the line from the apex of the splice of the two resin plates to the center of the eyeball and the optical axis perpendicular to the eyeball is greater than 0°.
The Fresnel lens group forms a magnified virtual image of the image shown on the display. Each set of Fresnel lenses is spliced from two resin plates engraved with a Fresnel-lens sawtooth structure; the spliced Fresnel lens group replaces an ordinary lens so that the left and right fields of view can be joined more conveniently. If an ordinary lens group were used, its generally circular shape would prevent seamless joining of the left and right imaging systems, whereas the two rectangular resin plates leave no gap after splicing.
Setting the angle between the line from the splice apex of the two resin plates to the center of the eyeball and the optical axis perpendicular to the eyeball to be greater than 0° is done for the following reason: in a monocular system, if the eye looks exactly at the splice of the two resin plates, the splice has no imaging function and the image on the display is not visible. The complete binocular system contains two such monocular systems. Without this angle requirement, when the eyes observe infinity both would fall exactly on the Fresnel-lens splices of the two systems, neither eye could obtain the image from the display, and this position would become a blind spot. With the angle set, because of binocular convergence, when one eye looks at the splice of its Fresnel lens the other eye necessarily does not look at its own splice, so no blind spot exists.
A more specific solution is that the two resin plates are spliced into an L shape; the lens group includes two sets of Fresnel lenses arranged in parallel; and the magnified virtual images formed through the visual optical system by the displays located at the left front and right front of the eye have an overlap region δ in space, within which the image brightness of each display gradually dims from the overlap toward its edge.
Other numbers of Fresnel lens groups may also be provided as needed. The gradual dimming of the two displays' image brightness from the overlap toward the edge ensures consistent brightness of the final composite image.
A more specific solution is that, for each L-shaped Fresnel lens, each arm together with the optical lens behind it constitutes a visual optical system with the following parameters: the horizontal and vertical fields of view are greater than 80°, the inner half-angle is 45°, and the overlap region is 90°. The visual optical systems of the two arms are joined into a monocular optical system with a field of view greater than 120°.
In this way, the two arms of the L-shaped Fresnel lens are each an optical system with a field of view above 80°, and after splicing they become an optical system with a field of view above 120°.
Another more specific solution is that the head-mounted display optical system has a monocular field of view greater than 120°, a binocular field of view greater than 150°, and an overlap greater than 90°.
The immersive virtual reality display method provided by the present invention comprises the following steps:
1) obtaining a real-scene image of the surrounding environment;
2) superimposing the real-scene image of the surrounding environment and a virtual video image to obtain the video signal actually to be played;
3) dividing the actually played video signal and displaying it on multiple displays;
4) analyzing the real-scene image of the surrounding environment to obtain position and posture information;
5) adjusting the virtual video image according to the position and posture information so that it fuses with the real-scene image of the surrounding environment.
A specific solution is that step 1) comprises: using a panoramic camera to capture a panorama of the surrounding environment as a base, and superimposing on this base the surrounding-environment images captured by the conventional cameras.
Compared with the prior art, the beneficial effects of the present invention are:
(1) Head tracking and posture detection are realized by combining a panoramic annular imaging system with a nine-axis motion sensor such as a gyroscope, so the accuracy of head tracking and attitude detection can be improved without increasing the space occupied by the device.
(2) The head-mounted optical system uses multi-display field-of-view splicing, so the field of view of the virtual display device can be increased to reach the optimal viewing angle of a head-mounted display, improving user immersion.
(3) The surrounding real scene is obtained by combining a panoramic camera system with a real-scene camera system, so the largest possible surrounding real scene can be obtained without reducing resolution, strengthening the user's sense of presence.
FIG. 1 is a schematic structural view of an embodiment of the present invention.
FIG. 2 is a schematic structural view of the immersive head-mounted display optical system of the embodiment.
FIG. 3 is a detailed structural view of one arm's visual optical system in the immersive head-mounted display optical system of the embodiment.
The invention is further described below with reference to the embodiment and the accompanying drawings.
Embodiment
Referring to FIG. 1 and FIG. 2, the immersive virtual reality head-mounted display device includes a head tracking and attitude detection system 1, a head-mounted display optical system 2, an image segmentation system 3, an encoding and decoding system 4, and a processing system 5. The encoding and decoding system 4 first decodes the signal from the head tracking and attitude detection system 1 and passes it to the processing system 5; after processing, the result is encoded by the encoding and decoding system 4 into a signal that the image segmentation system 3 can recognize and transmitted to the image segmentation system 3, which restores the encoded image information drawn by the processing system 5, divides it into multiple display images, and transmits them to the head-mounted display optical system 2 for display. The processing system 5 includes an external computer.
The head tracking and attitude detection system 1 includes a live-view camera unit, a panoramic camera unit, and a position and attitude sensor. The live-view camera unit includes two conventional cameras placed at the positions of the left and right eyes for actual shooting of the surrounding scene, providing real-scene images for the head-mounted display device's fusion of real and virtual video. The panoramic camera unit includes a panoramic imaging component for panoramic shooting of the surrounding scene, providing images used to determine the position and posture of the head-mounted display device. The position and attitude sensor may be a gyroscope.
In this embodiment, the head-mounted display optical system 2 includes two sets of monocular optical systems placed symmetrically left and right: monocular optical system 05 is placed in front of the left eye 07, and monocular optical system 06 is placed in front of the right eye 08. Each monocular optical system includes a lens group placed in front of the eye and, behind the lens group, two displays placed at the left front and right front of the eye. At the left front and right front of the left eye 07 are OLED display 01 and OLED display 02, respectively; at the left front and right front of the right eye 08 are OLED display 03 and OLED display 04, respectively. Each OLED screen has a resolution of 1000x1200, and the two OLEDs together synthesize a large field of view with a combined resolution of 2000x1200. The lens group contains at least one L-shaped Fresnel lens, and the first lens nearest the eye is an L-shaped Fresnel lens.
The lens group contains one L-shaped Fresnel lens and one aspherical lens. The L-shaped Fresnel lens near the eye is formed by two resin plates joined at their ends; at least one of the two surfaces of each resin plate is engraved with a Fresnel-lens sawtooth structure, and the angle β between the line from the apex of the joint of the two resin plates to the center of the eyeball and the optical axis perpendicular to the eyeball is greater than 0°, so as to avoid both eyes simultaneously looking at the splices of the two resin plates of their L-shaped Fresnel lenses when observing infinity, which would create a blind spot.
Each arm of the L-shaped Fresnel lens together with the optical lenses behind it forms a complete visual optical system; its specific structure, as shown in FIG. 3, contains one Fresnel lens near the eye and two aspherical plastic lenses. The left side is the position of the eye and the right side is the position of the display. The display is an OLED screen with a pixel size of about 50 µm, corresponding to 10 line pairs/mm.
The OLED screens at the left front and right front of the same eye each form a magnified virtual image through the L-shaped Fresnel lens, and the two virtual images have an overlap region δ in space. Within the overlap region δ, the image brightness of each display gradually dims from the overlap toward its edge, so that after the images in the overlap region are superimposed, the total brightness matches the brightness of each display's non-overlapping region, ensuring consistent brightness of the final composite image and a smooth splice of the left and right fields of view.
For each L-shaped Fresnel lens described above, each arm together with the optical lenses behind it constitutes a complete visual optical system with the following parameters: the horizontal and vertical fields of view are greater than 80°, the inner half-angle is 45°, and the overlap region is guaranteed to be 90°. The visual optical systems of the two arms are joined into a monocular optical system with a field of view greater than 120°. In this way, the two arms of the L-shaped Fresnel lens are each an optical system with a field of view above 80°, and after splicing they become an optical system with a field of view above 120°.
The specific implementation is as follows:
First, the head tracking and attitude detection system 1 captures the surrounding real scene, including video from the panoramic camera system and from the conventional cameras placed at the left- and right-eye positions. An FPGA in the encoding and decoding system 4 combines these into a single video signal and encodes it; the signal is sent over a cable to the encoding and decoding system on the processing-system-5 side for decoding, and the decoded video is input to the processing system 5 through a USB 3.0 port for computation, so as to obtain the surrounding real scene and perform head tracking and attitude detection. The processing system 5 then superimposes the virtual video signal to be displayed on the obtained real video signal, encodes it via the encoding and decoding system 4 into a format that the image segmentation system 3 can recognize, and transmits it over a cable to the image segmentation system 3. The image segmentation system 3 restores the encoded image information drawn by the processing system 5, divides it into multiple display images, and feeds them to the drivers of the OLED image sources in the immersive head-mounted display optical system 2. The human eye can then see, through the head-mounted display optical system 2, a virtual image superimposed on the surrounding real environment.
While the computer outputs the video signal, it continuously receives video signals and sensor signals from the head tracking and attitude detection system, from which it obtains head position and posture information.
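The fusion algorithm is not given in the text; a simple complementary filter over yaw/pitch/roll is one common way to combine a fast-but-drifting gyroscope with slower, drift-free vision estimates. The sketch below, including the variable names and the value of `alpha`, is an illustrative assumption.

```python
import numpy as np

def complementary_filter(gyro_rates: np.ndarray, vision_angles: np.ndarray,
                         prev_angles: np.ndarray, dt: float, alpha: float = 0.98) -> np.ndarray:
    """Fuse gyroscope rates with vision-based orientation estimates.

    gyro_rates    : angular velocity (rad/s) about x, y, z from the nine-axis sensor
    vision_angles : absolute orientation (rad) estimated from the panoramic camera images
    prev_angles   : previous fused orientation (rad)
    """
    gyro_angles = prev_angles + gyro_rates * dt          # integrate the gyro (fast, drifts)
    return alpha * gyro_angles + (1.0 - alpha) * vision_angles

# One update step: small rotation measured by the gyro, vision correction available.
fused = complementary_filter(np.array([0.01, 0.0, 0.02]), np.zeros(3),
                             prev_angles=np.zeros(3), dt=1e-3)
print(fused)
```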
The image signal is rendered by the computer's graphics card at 2000x1200 resolution and split by the image segmentation system 3 into two 1000x1200 images. Aberration correction is achieved through the combined structure of a Fresnel lens and an aspherical lens, and a large-field-of-view immersive display is achieved through image fusion. The final result is a monocular field of view greater than 120°, a binocular field of view greater than 150°, and an overlap greater than 90°.
The immersive virtual reality display method includes the following steps (a minimal schematic of one iteration is sketched after the list):
1) obtaining a real-scene image of the surrounding environment, specifically: using a panoramic camera to capture a panorama of the surrounding environment as a base, and superimposing on this base the surrounding-environment images captured by the conventional cameras;
2) superimposing the real-scene image of the surrounding environment and a virtual video image to obtain the video signal actually to be played;
3) dividing the actually played video signal and displaying it on multiple displays;
4) analyzing the real-scene image of the surrounding environment to obtain position and posture information;
5) adjusting the virtual video image according to the position and posture information so that it fuses with the real-scene image of the surrounding environment.
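As a schematic only, one iteration of steps 1)-5) might look like the sketch below. The helper for step 5 and the masking rule for step 2 are hypothetical stand-ins; the real scene is assumed to be already fused as in step 1.

```python
import numpy as np

def adjust_virtual_to_pose(virtual: np.ndarray, yaw_px: int) -> np.ndarray:
    """Hypothetical stand-in for step 5: shift the virtual layer to counter head rotation."""
    return np.roll(virtual, -yaw_px, axis=1)

def run_display_step(real_scene: np.ndarray, virtual: np.ndarray, yaw_px: int):
    """One schematic iteration: pose-adjust (5), composite (2), split for two displays (3)."""
    virtual = adjust_virtual_to_pose(virtual, yaw_px)
    mask = virtual.any(axis=-1, keepdims=True)           # naive rule: non-black pixels are virtual
    composite = np.where(mask, virtual, real_scene)      # step 2: virtual layer over real scene
    half = composite.shape[1] // 2
    return composite[:, :half], composite[:, half:]      # step 3: one image per display

real = np.zeros((1200, 2000, 3), dtype=np.uint8)         # fused real scene (step 1)
virt = np.zeros_like(real)
virt[500:700, 900:1100] = 255                            # a virtual object
left_img, right_img = run_display_step(real, virt, yaw_px=40)
print(left_img.shape, right_img.shape)
```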
Claims (8)
- An immersive virtual reality head-mounted display device, characterized in that it comprises a head tracking and attitude detection system, a head-mounted display optical system, an image segmentation system, an encoding and decoding system, and a processing system; the head tracking and attitude detection system is configured to capture images of the environment surrounding the user's eyes; the encoding and decoding system decodes the image signal from the tracking and attitude detection system and transmits it to the processing system; after processing by the processing system, the result is encoded by the encoding and decoding system and transmitted to the image segmentation system; and the image segmentation system divides the received image information into a plurality of display images, which are displayed through the head-mounted display optical system.
- The immersive virtual reality head-mounted display device according to claim 1, characterized in that the head tracking and attitude detection system comprises a live-view camera unit placed at the positions of the user's left and right eyes for actual shooting of the surrounding scene, a panoramic camera unit for panoramic shooting of the surrounding scene, and a position and attitude sensor.
- The immersive virtual reality head-mounted display device according to claim 1, characterized in that the head-mounted display optical system comprises two sets of monocular optical systems arranged symmetrically left and right; each monocular optical system comprises a lens group disposed in front of the human eye and displays placed behind the lens group; the lens group comprises at least one set of Fresnel lenses, each set formed by splicing two resin plates engraved with a Fresnel-lens sawtooth structure; there are two displays, each parallel to its corresponding resin plate; and the angle β between the line from the apex of the splice of the two resin plates to the center of the eyeball and the optical axis perpendicular to the eyeball is greater than 0°.
- The immersive virtual reality head-mounted display device according to claim 3, characterized in that the two resin plates are spliced into an L shape; the lens group comprises two sets of Fresnel lenses arranged in parallel; and the magnified virtual images respectively formed through the visual optical system by the displays located at the left front and right front of the human eye have an overlap region δ in space, within which the image brightness of each display gradually dims from the overlap toward its edge.
- The immersive virtual reality head-mounted display device according to claim 4, characterized in that, for each L-shaped Fresnel lens, each arm together with the optical lens behind it constitutes a visual optical system with the following parameters: the horizontal and vertical fields of view are greater than 80°, the inner half-angle is 45°, and the overlap region is 90°; and the visual optical systems of the two arms are joined into a monocular optical system with a field of view greater than 120°.
- The immersive virtual reality head-mounted display device according to claim 3, characterized in that the head-mounted display optical system has a monocular field of view greater than 120°, a binocular field of view greater than 150°, and an overlap greater than 90°.
- An immersive virtual reality display method, characterized by comprising the following steps: 1) obtaining a real-scene image of the surrounding environment; 2) superimposing the real-scene image of the surrounding environment and a virtual video image to obtain the video signal actually to be played; 3) dividing the actually played video signal and displaying it on multiple displays; 4) analyzing the real-scene image of the surrounding environment to obtain position and posture information; 5) adjusting the virtual video image according to the position and posture information so that it fuses with the real-scene image of the surrounding environment.
- The immersive virtual reality display method according to claim 7, characterized in that step 1) comprises: using a panoramic camera to capture a panorama of the surrounding environment as a base, and superimposing on this base the surrounding-environment images captured by a conventional camera.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710786863.X | 2017-09-04 | ||
CN201710786863.XA CN107462994A (en) | 2017-09-04 | 2017-09-04 | Immersive VR head-wearing display device and immersive VR display methods |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019041614A1 true WO2019041614A1 (en) | 2019-03-07 |
Family
ID=60551843
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/114192 WO2019041614A1 (en) | 2017-09-04 | 2017-12-01 | Head-mounted immersive virtual reality display device and immersive virtual reality display method |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107462994A (en) |
WO (1) | WO2019041614A1 (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019113887A1 (en) | 2017-12-14 | 2019-06-20 | 深圳市大疆创新科技有限公司 | Method, device and system for adjusting image, as well as computer readable storage medium |
CN108646925B (en) * | 2018-06-26 | 2021-01-05 | 朱光 | Split type head-mounted display system and interaction method |
CN109375764B (en) * | 2018-08-28 | 2023-07-18 | 北京凌宇智控科技有限公司 | Head-mounted display, cloud server, VR system and data processing method |
CN109358754B (en) * | 2018-11-02 | 2023-03-24 | 北京盈迪曼德科技有限公司 | Mixed reality head-mounted display system |
CN109086755B (en) * | 2018-11-07 | 2022-07-08 | 上海电气集团股份有限公司 | Virtual reality display method and system of rehabilitation robot based on image segmentation |
CN110139028B (en) * | 2019-03-25 | 2020-07-07 | 华为技术有限公司 | Image processing method and head-mounted display device |
CN110231712A (en) * | 2019-04-29 | 2019-09-13 | 成都理想境界科技有限公司 | A kind of augmented reality AR glasses |
CN111988533B (en) * | 2019-05-23 | 2022-07-22 | 川田科技株式会社 | Welding assistance method and device |
CN110349527B (en) * | 2019-07-12 | 2023-12-22 | 京东方科技集团股份有限公司 | Virtual reality display method, device and system and storage medium |
CN110361867A (en) * | 2019-08-20 | 2019-10-22 | 江苏数字鹰科技股份有限公司 | A kind of visible system with Wide-angle |
CN110675505A (en) * | 2019-10-10 | 2020-01-10 | 睿宇时空科技(重庆)有限公司 | Indoor and outdoor house watching system based on panoramic virtual and actual seamless fusion |
CN112285935B (en) * | 2020-11-17 | 2022-06-14 | 京东方科技集团股份有限公司 | Display assembly, assembling method thereof and wearable display device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN204101815U (en) * | 2014-09-28 | 2015-01-14 | 光场视界(北京)科技有限公司 | A kind of virtual glasses repaid based on optics and pattern distortion complementation |
CN104883561A (en) * | 2015-06-06 | 2015-09-02 | 深圳市虚拟现实科技有限公司 | Three-dimensional panoramic display method and head-mounted display device |
US20160133055A1 (en) * | 2014-11-07 | 2016-05-12 | Eye Labs, LLC | High resolution perception of content in a wide field of view of a head-mounted display |
CN205787371U (en) * | 2016-05-20 | 2016-12-07 | 成都理想境界科技有限公司 | A kind of near-eye display system for virtual reality |
CN106444023A (en) * | 2016-08-29 | 2017-02-22 | 北京知境科技有限公司 | Super-large field angle binocular stereoscopic display transmission type augmented reality system |
CN106464854A (en) * | 2014-02-26 | 2017-02-22 | 索尼电脑娱乐欧洲有限公司 | Image encoding and display |
CN106501940A (en) * | 2016-12-12 | 2017-03-15 | 湖南工业大学 | A kind of height degree of immersing Head-mounted display control system |
WO2017064502A1 (en) * | 2015-10-14 | 2017-04-20 | Sony Interactive Entertainment Inc. | Head-mountable display system |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102169283B (en) * | 2011-04-19 | 2012-12-26 | 浙江大学 | Suspension type 360-degree view field three-dimensional display device based on projector array |
CN104094162A (en) * | 2011-12-02 | 2014-10-08 | 杰瑞·G·奥格伦 | Wide field-of-view 3d stereo vision platform with dynamic control of immersive or heads-up display operation |
US9298008B2 (en) * | 2012-12-06 | 2016-03-29 | Cherif Atia Algreatly | 3D immersion technology |
US10440355B2 (en) * | 2015-11-06 | 2019-10-08 | Facebook Technologies, Llc | Depth mapping with a head mounted display using stereo cameras and structured light |
CN105807429B (en) * | 2016-05-20 | 2018-02-09 | 成都理想境界科技有限公司 | A kind of near-eye display system for virtual reality |
CN106775528A (en) * | 2016-12-12 | 2017-05-31 | 合肥华耀广告传媒有限公司 | A kind of touring system of virtual reality |
CN106909215B (en) * | 2016-12-29 | 2020-05-12 | 深圳市皓华网络通讯股份有限公司 | Fire fighting three-dimensional visual command system based on accurate positioning and augmented reality |
-
2017
- 2017-09-04 CN CN201710786863.XA patent/CN107462994A/en active Pending
- 2017-12-01 WO PCT/CN2017/114192 patent/WO2019041614A1/en active Application Filing
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106464854A (en) * | 2014-02-26 | 2017-02-22 | 索尼电脑娱乐欧洲有限公司 | Image encoding and display |
CN204101815U (en) * | 2014-09-28 | 2015-01-14 | 光场视界(北京)科技有限公司 | A kind of virtual glasses repaid based on optics and pattern distortion complementation |
US20160133055A1 (en) * | 2014-11-07 | 2016-05-12 | Eye Labs, LLC | High resolution perception of content in a wide field of view of a head-mounted display |
CN104883561A (en) * | 2015-06-06 | 2015-09-02 | 深圳市虚拟现实科技有限公司 | Three-dimensional panoramic display method and head-mounted display device |
WO2017064502A1 (en) * | 2015-10-14 | 2017-04-20 | Sony Interactive Entertainment Inc. | Head-mountable display system |
CN205787371U (en) * | 2016-05-20 | 2016-12-07 | 成都理想境界科技有限公司 | A kind of near-eye display system for virtual reality |
CN106444023A (en) * | 2016-08-29 | 2017-02-22 | 北京知境科技有限公司 | Super-large field angle binocular stereoscopic display transmission type augmented reality system |
CN106501940A (en) * | 2016-12-12 | 2017-03-15 | 湖南工业大学 | A kind of height degree of immersing Head-mounted display control system |
Also Published As
Publication number | Publication date |
---|---|
CN107462994A (en) | 2017-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019041614A1 (en) | Head-mounted immersive virtual reality display device and immersive virtual reality display method | |
US10257492B2 (en) | Image encoding and display | |
WO2017086263A1 (en) | Image processing device and image generation method | |
CN105739093B (en) | Through mode augmented reality near-to-eye | |
US10306202B2 (en) | Image encoding and display | |
US10054796B2 (en) | Display | |
WO2017173735A1 (en) | Video see-through-based smart eyeglasses system and see-through method thereof | |
WO2017118309A1 (en) | Closed wearable panoramic image-capturing and processing system, and operation method therefor | |
GB2498184A (en) | Interactive autostereoscopic three-dimensional display | |
US20180246331A1 (en) | Helmet-mounted display, visual field calibration method thereof, and mixed reality display system | |
TWI629506B (en) | Stereoscopic video see-through augmented reality device with vergence control and gaze stabilization, head-mounted display and method for near-field augmented reality application | |
WO2020215960A1 (en) | Method and device for determining area of gaze, and wearable device | |
US10582184B2 (en) | Instantaneous 180-degree 3D recording and playback systems | |
CN112655202A (en) | Reduced bandwidth stereo distortion correction for fisheye lens of head-mounted display | |
US11366315B2 (en) | Image processing apparatus, method for controlling the same, non-transitory computer-readable storage medium, and system | |
WO2018225804A1 (en) | Image display device, image display method, and image display program | |
JP2017046065A (en) | Information processor | |
US10972719B1 (en) | Head-mounted display having an image sensor array | |
WO2019113935A1 (en) | Closed wearable panoramic image capturing and processing system and operating method therefor | |
Yan et al. | Research summary on light field display technology based on projection | |
JPH09205660A (en) | Electronic animation stereoscopic vision system and stereoscopic vision eyeglass | |
WO2023005606A1 (en) | Display device and display method therefor | |
CN106371216B (en) | It is a kind of for observing the secondary display virtual real optical system of large screen | |
CN209991990U (en) | Map interaction system | |
KR102724897B1 (en) | System for providing augmented realty |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17923150 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17923150 Country of ref document: EP Kind code of ref document: A1 |