WO2020140720A1 - Rendering method, apparatus and device for virtual reality scenes - Google Patents
Rendering method, apparatus and device for virtual reality scenes
- Publication number
- WO2020140720A1 (PCT/CN2019/124860)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- rendering
- virtual reality
- area
- reality scene
- display
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/0093—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2215/00—Indexing scheme for image rendering
- G06T2215/16—Using real world measurements to influence rendering
Definitions
- the present application relates to the field of virtual reality technology, and in particular, to a rendering method, device and equipment for virtual reality scenes.
- As a simulation technology that can create and let users experience a virtual world, virtual reality technology has gradually become one of the research hotspots in human-computer interaction. As virtual reality technology develops, users have increasingly high requirements for the realism and immersion of virtual reality.
- the first purpose of this application is to propose a rendering method for virtual reality scenes. By rendering images while in the rendering idle state and calling them at display time, the method solves the problem that a low GPU rendering refresh rate limits the display refresh rate, thereby improving the display refresh rate, reducing delay, and reducing the user's dizziness when using a virtual reality device.
- the second purpose of the present application is to propose a rendering device for virtual reality scenes.
- the third purpose of this application is to propose a virtual reality device.
- the fourth object of the present application is to propose a computer-readable storage medium.
- An embodiment of the first aspect of the present application provides a rendering method of a virtual reality scene, including:
- the rendering method of the virtual reality scene in the embodiment of the present application obtains the virtual reality scene and judges whether it is in a rendering idle state; if so, it performs image rendering on the virtual reality scene to generate a display image and stores the correspondence between the display image and the display area. Then, the target area to be displayed of the virtual reality scene is acquired, and the target display image corresponding to the target area is called and displayed according to the correspondence. By rendering images in the rendering idle state and calling them at display time, this solves the problem that the low GPU rendering refresh rate during real-time GPU rendering limits the display refresh rate, thereby increasing the display refresh rate, reducing delay, and reducing the user's dizziness when using the virtual reality device. In addition, while the display refresh rate is improved, the authenticity of the picture is ensured, and calling pre-rendered display images reduces the power consumption of the device and improves its battery life.
- the judging whether it is in the rendering idle state includes: judging whether it is in the scene initialization state, and if so, determining that it is in the rendering idle state;
- the rendering of the virtual reality scene includes: acquiring the gaze area in the virtual reality scene; and performing image rendering on the gaze area.
- the method further includes: acquiring a non-gazing area in the virtual reality scene; and performing image rendering on the non-gazing area in a preset order.
- the judging whether it is in a rendering idle state includes: judging whether it is in a scene display state and the gaze area has completed image rendering, and if so, it is determined to be in a rendering idle state;
- the rendering of the virtual reality scene includes: obtaining the non-gazing area in the virtual reality scene; and performing image rendering on the non-gazing area in a preset order.
- the method further includes: interrupting the image rendering operation when the position of the gaze area changes, and acquiring the changed gaze area; and performing image rendering on the changed gaze area.
- the rendering of the virtual reality scene includes: obtaining an area to be rendered, and determining whether the area to be rendered has completed image rendering; if so, determining the next area to be rendered according to the area to be rendered, And perform image rendering on the next region to be rendered.
- the image rendering of the virtual reality scene, generating a display image, and storing the correspondence between the display image and the display area includes: establishing a coordinate system according to the virtual reality scene, and dividing the virtual reality scene into a plurality of display areas according to the coordinate system; performing image rendering on the display areas to generate display images; and storing the display images and the correspondence between the display images and the display areas.
- An embodiment of the second aspect of the present application provides a rendering device of a virtual reality scene, including:
- the judgment module is used to obtain the virtual reality scene and judge whether it is in the rendering idle state;
- the processing module is configured to perform image rendering on the virtual reality scene if it is learned that the rendering is idle, generate a display image, and store the correspondence between the display image and the display area;
- the display module is configured to acquire a target area to be displayed of the virtual reality scene, and call and display a target display image corresponding to the target area according to the corresponding relationship.
- the rendering device of the virtual reality scene of the embodiment of the present application obtains the virtual reality scene and judges whether it is in a rendering idle state; if so, it performs image rendering on the virtual reality scene to generate a display image and stores the correspondence between the display image and the display area. Then, the target area to be displayed of the virtual reality scene is acquired, and the target display image corresponding to the target area is called and displayed according to the correspondence. By rendering images in the rendering idle state and calling them at display time, this solves the problem that the low GPU rendering refresh rate during real-time GPU rendering limits the display refresh rate, thereby increasing the display refresh rate, reducing delay, and reducing the user's dizziness when using the virtual reality device. In addition, while the display refresh rate is improved, the authenticity of the picture is ensured, and calling pre-rendered display images reduces the power consumption of the device and improves its battery life.
- the judgment module is specifically used to judge whether it is in the scene initialization state, and if so, to determine that it is in the rendering idle state; the processing module is specifically used to: obtain the gaze area in the virtual reality scene; and perform image rendering on the gaze area.
- the processing module is specifically configured to: obtain a non-gaze area in the virtual reality scene; and perform image rendering on the non-gaze area in a preset order.
- the judgment module is specifically used to judge whether it is in a scene display state and whether the gaze area has completed image rendering, and if so, to determine that it is in a rendering idle state; the processing module is specifically used to: obtain the non-gaze area in the virtual reality scene; and perform image rendering on the non-gaze area in a preset order.
- the device further includes: an interrupt module, configured to interrupt an image rendering operation when the position of the gaze area changes, and acquire the changed gaze area; and perform image rendering on the changed gaze area.
- the processing module is further configured to: obtain an area to be rendered, and determine whether the area to be rendered has completed image rendering; if so, determine the next area to be rendered according to the area to be rendered, and Image rendering is performed on the next area to be rendered.
- An embodiment of the third aspect of the present application proposes a virtual reality device, including the apparatus for rendering a virtual reality scene as described in the embodiment of the second aspect.
- An embodiment of the fourth aspect of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the rendering method of a virtual reality scene as described in the embodiment of the first aspect.
- FIG. 1 is a schematic flowchart of a method for rendering a virtual reality scene provided by an embodiment of the present application
- FIG. 2 is a schematic flowchart of another virtual reality scene rendering method provided by an embodiment of the present application.
- Figure 3 is a schematic diagram of a virtual reality scene
- FIG. 4 is a schematic diagram of rendering a virtual reality scene
- FIG. 5 is a schematic flowchart of another virtual reality scene rendering method provided by an embodiment of the present application.
- FIG. 6 is a schematic diagram of another virtual reality scene rendering
- FIG. 8 is a schematic structural diagram of a rendering device for a virtual reality scene provided by an embodiment of the present application.
- FIG. 9 is a schematic structural diagram of another virtual reality scene rendering device provided by an embodiment of the present application.
- FIG. 10 shows a block diagram of an exemplary electronic device suitable for implementing embodiments of the present application.
- FIG. 1 is a schematic flowchart of a method for rendering a virtual reality scene provided by an embodiment of the present application. As shown in FIG. 1, the method includes:
- Step 101 Obtain a virtual reality scene and determine whether it is in a rendering idle state.
- when displaying a virtual reality scene, the virtual reality scene may be acquired first. For example, when performing medical diagnosis through a virtual reality scene, a user's selection instruction for the medical scene can be received, and the medical virtual reality scene obtained according to the instruction. Similarly, when teaching through a virtual reality scene, a user's selection instruction for an educational scene may be received, and an educational virtual reality scene obtained according to the instruction.
- Next, it is determined whether the virtual reality device is in a rendering idle state.
- it can be determined whether it is in the scene initialization state, and if it is known that the device is in the scene initialization state, it is determined to be in the rendering idle state.
- Alternatively, it can be determined that the device is in the scene display state, for example when the virtual reality device is displaying the virtual reality scene; if the gaze area has also completed image rendering, it is determined to be in the rendering idle state.
- Step 102 if so, perform image rendering on the virtual reality scene, generate a display image, and store the correspondence between the display image and the display area.
- the corresponding rendering information may be obtained according to the acquired virtual reality scene in order to perform image rendering.
- three-dimensional model information, three-dimensional animation definition information, material information, etc. corresponding to the virtual reality scene can be acquired, the virtual reality scene can be image rendered, and then a display image can be generated.
- the virtual reality scene can be rendered according to different strategies to generate a display image, and the correspondence between the display image and the display area is stored.
- a partial area in the virtual reality scene may be acquired, and the partial area is image-rendered to generate a display image corresponding to the partial area.
- all regions of the virtual reality scene can be image rendered and display images can be generated.
- a coordinate system can be established based on the virtual reality scene, and the virtual reality scene can be divided into multiple display areas according to the coordinate system.
- for example, image rendering is performed on display area 1 to generate display image 2; display image 2 is stored, and a mapping table is generated and stored according to the correspondence between display image 2 and display area 1.
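The grid-and-mapping bookkeeping described above can be sketched as follows. This is an illustrative assumption, not the patent's implementation: `divide_scene`, `render_area`, and the grid dimensions are hypothetical names chosen for the example.

```python
def divide_scene(width, height, d):
    """Divide a scene of width x height into square display areas of side d."""
    areas = []
    for y in range(0, height, d):
        for x in range(0, width, d):
            areas.append((x, y))  # each area identified by its top-left corner
    return areas

def render_area(area):
    """Stand-in for GPU image rendering; returns a placeholder display image."""
    return f"image@{area}"

def build_mapping(areas):
    """Render each area during idle time; store the image/area correspondence."""
    return {area: render_area(area) for area in areas}

areas = divide_scene(width=8, height=4, d=2)   # a 4 x 2 grid of areas
mapping = build_mapping(areas)                 # display area -> display image
```

At display time the device would only consult `mapping` instead of invoking the renderer, which is the source of the refresh-rate gain claimed above.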
- Step 103 Obtain the target area to be displayed of the virtual reality scene, and call and display the target display image corresponding to the target area according to the corresponding relationship.
- for example, the position and posture information of the user's head can be detected by sensors such as gyroscopes and accelerometers, and the user's fixation point position in the virtual reality scene obtained; the gaze area is determined according to the fixation point position, and the target area to be displayed is then determined according to the gaze area. Further, by querying the pre-stored correspondence between display images and display areas, the target display image corresponding to the target area is obtained and displayed.
- for example, the virtual reality scene includes display areas A and B, corresponding to display images a and b respectively. If the target area matches display area A according to the position and posture information detected by the sensors, the corresponding display image a is obtained and displayed.
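The area lookup in this example can be illustrated with a small sketch. The grid layout, `area_of_point`, and the mapping contents are assumptions for illustration; a real device would first map head-pose sensor output to scene coordinates.

```python
def area_of_point(x, y, d):
    """Return the grid area (by its top-left corner) containing point (x, y)."""
    return (x // d * d, y // d * d)

# display area -> pre-rendered display image (the A/a, B/b pairing above)
mapping = {(0, 0): "a", (2, 0): "b"}

def display_for_gaze(x, y, d, mapping):
    """Map a gaze point to its display area and call the stored image."""
    target_area = area_of_point(x, y, d)
    return mapping[target_area]

image = display_for_gaze(1, 1, 2, mapping)  # this gaze point falls in area (0, 0)
```

Because the lookup is a constant-time dictionary access rather than a render pass, the display refresh rate is decoupled from the GPU rendering refresh rate.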
- in the related art, GPUs (Graphics Processing Units) are used to render images in real time when displaying virtual reality scenes. Because GPU rendering capability is limited, the GPU's lower rendering refresh rate limits the display refresh rate during real-time GPU rendering, resulting in a lower display refresh rate and a lower frame rate.
- for example, a GPU rendering refresh rate of 30-40 Hz is much lower than a display refresh rate of 60-120 Hz. The frame rate can be increased by inserting a fake frame, or by calling the previous frame image plus an offset for display, but the picture is then less authentic, since the resulting picture is not exactly the picture that should be displayed at that time.
- compared with inserting a fake frame or calling the previous frame image plus an offset for display, the rendering method of the virtual reality scene of the embodiment of the present application improves the display refresh rate while ensuring the authenticity of the picture; moreover, by calling display images that have already been rendered, it can reduce the power consumption of the device.
- the rendering method of the virtual reality scene in the embodiment of the present application obtains the virtual reality scene and judges whether it is in a rendering idle state; if so, it performs image rendering on the virtual reality scene to generate a display image and stores the correspondence between the display image and the display area. Then, the target area to be displayed of the virtual reality scene is acquired, and the target display image corresponding to the target area is called and displayed according to the correspondence. By rendering images in the rendering idle state and calling them at display time, this solves the problem that the low GPU rendering refresh rate during real-time GPU rendering limits the display refresh rate, thereby increasing the display refresh rate, reducing delay, and reducing the user's dizziness when using the virtual reality device. In addition, while the display refresh rate is improved, the authenticity of the picture is ensured, and calling pre-rendered display images reduces the power consumption of the device and improves its battery life.
- FIG. 2 is a schematic flowchart of another virtual reality scene rendering method provided by an embodiment of the present application. As shown in FIG. 2, the method includes:
- Step 201 Determine whether it is in the scene initialization state; if so, it is determined to be in the rendering idle state.
- the scene initialization instruction may be acquired, and it is determined that the scene is in the initialization state within a preset time after receiving the initialization instruction.
- the preset time can be determined based on a large amount of experimental data, or can be set according to need, and there is no limitation here.
- the display state of the virtual reality device may be obtained, and if it is known that the current virtual reality device is not displaying a virtual reality scene, it is determined to be in a scene initialization state.
- Step 202 Obtain the gaze area in the virtual reality scene.
- Step 203 Perform image rendering on the gaze area.
- for example, the position and posture information of the user's head can be detected by sensors such as gyroscopes and accelerometers to obtain the user's gaze point position in the virtual reality scene, and the gaze area is then determined based on the detected gaze point position together with parameters such as the gaze range of the human eye, the field of view angle, and the screen center.
- the gaze area in the virtual reality scene may be acquired, and the gaze area may be image rendered to generate a corresponding display image.
- the gaze area is rendered in the scene initialization state to generate a display image, and then when the scene initialization is completed and the scene is displayed, the display image corresponding to the gaze area is called for display.
- Step 204 Obtain the non-gazing area in the virtual reality scene.
- Step 205 Perform image rendering on the non-gazing area according to a preset order.
- in addition, the non-gazing area in the virtual reality scene can also be rendered to generate corresponding display images. That is to say, in the scene initialization state, image rendering of the entire virtual reality scene can be carried out, and the corresponding display images generated and stored, so that when the user uses the virtual reality scene, the display images are called and displayed to increase the frame rate and reduce delay time and dizziness.
- for example, the distance between each non-gazing area and the center of the screen may be obtained, and image rendering of the non-gazing areas performed in order from the center of the screen outwards, generating the corresponding display images.
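The center-outward ordering might look like the following sketch; the area representation (top-left corners of squares of side `d`) and the function names are assumptions carried over for illustration, not part of the patent.

```python
import math

def render_order(areas, center, d):
    """Sort areas by the distance of their centers from the screen center."""
    def dist(area):
        cx, cy = area[0] + d / 2, area[1] + d / 2  # center of the square area
        return math.hypot(cx - center[0], cy - center[1])
    return sorted(areas, key=dist)

non_gaze_areas = [(0, 0), (2, 0), (4, 0)]
order = render_order(non_gaze_areas, center=(3, 1), d=2)  # innermost area first
```

Rendering in this order means that if the process is interrupted by a gaze change, the areas most likely to be looked at next are already in the mapping table.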
- the following describes the application scenarios for rendering the gaze area and the non-gaze area.
- points a and b in the figure are fixation points, and point c is an intermediate point between the two fixation points.
- a coordinate system can be established based on the virtual reality scene area, and the virtual reality scene divided into multiple areas according to the coordinate system, for example into the multiple square areas shown in the figure.
- the display image is displayed to the human eye through the lens or the display component on the virtual reality device.
- FIG. 4 is a schematic plan view of the virtual reality scene area in FIG. 3.
- given fixation points a(x1, y1) and b(x2, y2), the range of the left-eye fixation area is the area enclosed by (x1±d, y1±d), and the right-eye fixation area is the area enclosed by (x2±d, y2±d), where d is the side length of a square area in the figure.
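The fixation-area ranges above can be written as a small membership check, assuming the same square-of-half-width-d definition; the concrete point and `d` value below are made up for illustration.

```python
def fixation_area(px, py, d):
    """Bounds (x_min, x_max, y_min, y_max) of the square around a fixation point."""
    return (px - d, px + d, py - d, py + d)

def in_area(x, y, bounds):
    """Whether point (x, y) falls inside the fixation-area bounds."""
    x_min, x_max, y_min, y_max = bounds
    return x_min <= x <= x_max and y_min <= y <= y_max

left_area = fixation_area(4, 4, d=2)   # a hypothetical fixation point a(4, 4)
```

The same construction applied to b(x2, y2) yields the right-eye fixation area.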
- the image is first rendered on the gaze area, and the display image is generated and stored.
- then, the non-gazing areas are rendered in sequence, in order of the area numbers in the figure from small to large, and the display images stored, to be called when the image of a certain area needs to be displayed.
- the area numbers in the figure can be determined according to the preset correspondence relationship of the data table, and only represent a sequence example of rendering the non-gazing area.
- the shape, size, and relative position of the gaze area are only examples, and are used to explain the present application, and are not limited herein.
- in this way, the display images are called and displayed when the user uses the virtual reality scene, to increase the frame rate and reduce delay time and dizziness.
- FIG. 5 is a schematic flowchart of another virtual reality scene rendering method provided by an embodiment of the present application. As shown in FIG. 5, the method includes:
- Step 301 Determine whether it is in the scene display state and whether the gaze area has completed image rendering; if so, it is determined to be in the rendering idle state.
- the display state of the virtual reality device may be obtained, and if it is known that the current virtual reality device is displaying the virtual reality scene, it is further determined whether the current gaze area has completed image rendering. If it is known that the current gaze area has completed image rendering, it is determined to be in the rendering idle state.
- the virtual reality scene is rendered in real time by the GPU.
- the image rendering is performed on other areas in advance.
- for example, the virtual reality scene includes areas 1, 2, and 3.
- the current gaze area is area 1.
- image rendering is performed on areas 2 and 3 in advance, so that the pre-rendered display images can be called when those areas are displayed, thereby improving the display refresh rate and reducing delay.
- Step 302 Obtain the non-gazing area in the virtual reality scene.
- Step 303 Perform image rendering on the non-gazing area according to a preset order.
- the position of the fixation point may be detected at preset intervals, and if the fixation point position is found to have changed, it is determined that the gaze area position has changed. The current image rendering process is then interrupted, the gaze area reacquired, and the corresponding image rendering operation performed according to the changed gaze area position.
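One hedged way to model this interrupt behavior is a render loop that re-reads the gaze point between areas and re-plans when it moves. The polling model and the `queue_for`/`read_gaze` names are hypothetical; the patent describes a program interruption, not a specific API.

```python
def render_with_interrupts(queue_for, read_gaze):
    """Render areas in order, re-planning whenever the gaze point moves.

    queue_for(gaze) returns the rendering order for a gaze position;
    read_gaze() returns the current gaze position (a sensor stand-in).
    """
    rendered = []
    gaze = read_gaze()
    queue = queue_for(gaze)
    while queue:
        area = queue.pop(0)
        rendered.append(area)         # stand-in for rendering this area
        new_gaze = read_gaze()        # periodic fixation-point check
        if new_gaze != gaze:          # gaze moved: interrupt and re-plan
            gaze = new_gaze
            queue = [a for a in queue_for(gaze) if a not in rendered]
    return rendered

# Simulated sensor: the gaze moves from g1 to g2 after two reads.
gazes = iter(["g1", "g1", "g2", "g2", "g2", "g2"])
plans = {"g1": ["A", "B", "C"], "g2": ["C", "D"]}
result = render_with_interrupts(lambda g: list(plans[g]), lambda: next(gazes))
```

Note that the re-planned queue excludes already-rendered areas, which is exactly the skip behavior described next.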
- it is also possible to obtain the current area to be rendered and determine whether that area has already completed image rendering. If so, the area is skipped and the next area is obtained for image rendering in the preset order.
- the area to be rendered is an area in the virtual reality scene that is awaiting rendering. For example, when changing from gaze area 1 to gaze area 2, image rendering needs to be performed on gaze area 2, so gaze area 2 is determined as the area to be rendered. It is then determined whether gaze area 2 has already completed image rendering; if it has, gaze area 2 is skipped and the next area acquired in the preset order for image rendering. When the gaze area changes, the current area to be rendered may already have been rendered; by querying the correspondence between display images and display areas, already-rendered areas can be identified, reducing repeated processing and improving processing efficiency.
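The skip logic can be sketched by consulting the stored image/area correspondence before rendering each area; here `mapping` stands in for that correspondence, and all names are illustrative assumptions.

```python
def render_pending(order, mapping):
    """Render only areas that have no stored display image yet."""
    newly_rendered = []
    for area in order:
        if area in mapping:              # already rendered earlier: skip it
            continue
        mapping[area] = f"image@{area}"  # stand-in for rendering + storing
        newly_rendered.append(area)
    return newly_rendered

mapping = {"gaze2": "image@gaze2"}       # gaze area 2 was rendered previously
done = render_pending(["gaze2", "n1", "n2"], mapping)
```

The membership test against the mapping table is what avoids the repeated processing mentioned above.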
- the original fixation area is an area determined according to fixation points a and b.
- when the fixation area changes, an interruption is triggered according to the program, the new fixation points a1 and b1 are obtained, and the current fixation area is determined; rendering of the current fixation area then begins. After that, image rendering proceeds outward from the center point c1 between the new fixation points a1 and b1. Scene areas that have already been rendered are skipped, and the current algorithm continues rendering the remaining areas.
- at the beginning, the virtual reality scene is acquired; for example, the user's selection instruction is received to select the corresponding virtual reality scene. If the virtual reality scene is a known virtual reality scene, such as a medical or similar scene, the next step is performed; otherwise, the virtual reality scene is displayed directly.
- the gaze point coordinates after entering the scene are determined from the posture information of sensors such as the gyroscope and the accelerometer.
- the gaze area is determined according to the gaze point coordinates, gaze range, field angle, and screen center, where the gaze range, field angle, and screen center can be obtained from related measurements; otherwise, the virtual reality scene is displayed.
- image rendering is performed on the gaze area of the virtual reality scene; when the gaze area rendering is completed and the gaze area is unchanged, the non-gaze areas are sequentially image-rendered according to the preset data table correspondence, and the generated display images are stored. When the gaze area changes, the new gaze area is acquired and the corresponding image rendering operation performed.
- after scene initialization is completed, the device enters virtual reality scene display and starts to call the display images stored in the current coordinate system for display. By rendering images in the rendering idle state and calling them at display time, the problem that the low GPU rendering refresh rate during real-time GPU rendering limits the display refresh rate is solved, thereby increasing the display refresh rate, reducing delay, and reducing the user's dizziness when using the virtual reality device.
- in addition, while the display refresh rate is improved, the authenticity of the picture is ensured, and calling pre-rendered display images reduces the power consumption of the device and improves its battery life.
- the present application also proposes a rendering device for virtual reality scenes.
- FIG. 8 is a schematic structural diagram of a rendering apparatus for a virtual reality scene provided by an embodiment of the present application. As shown in FIG. 8, the apparatus includes: a judgment module 100, a processing module 200, and a display module 300.
- the determination module 100 is used to obtain a virtual reality scene and determine whether it is in a rendering idle state.
- the processing module 200 is configured to perform image rendering on the virtual reality scene if it is learned that the rendering is in an idle state, generate a display image, and store the correspondence between the display image and the display area.
- the display module 300 is configured to acquire the target area to be displayed of the virtual reality scene, and call and display the target display image corresponding to the target area according to the corresponding relationship.
- the device shown in FIG. 9 further includes: an interrupt module 400.
- the interrupt module 400 is used to determine whether the position of the gaze area has changed; if so, interrupt the current image rendering and obtain the changed gaze area; and perform image rendering on the changed gaze area.
- further, the judgment module 100 is specifically used to judge whether it is in the scene initialization state, and if so, to determine that it is in the rendering idle state; the processing module 200 is specifically used to: obtain the gaze area in the virtual reality scene; and perform image rendering on the gaze area.
- processing module 200 is specifically configured to: obtain the non-gazing area in the virtual reality scene; and perform image rendering on the non-gazing area according to a preset order.
- further, the judgment module 100 is specifically used to judge whether it is in the scene display state and whether the gaze area has completed image rendering, and if so, to determine that it is in the rendering idle state; the processing module 200 is specifically used to: obtain the non-gaze area in the virtual reality scene; and perform image rendering on the non-gaze area in a preset order.
- processing module 200 is also used to: obtain the current area to be rendered, and determine whether the area to be rendered has completed image rendering; if so, skip the area to be rendered.
- the modules in this embodiment may be modules with data processing capability and/or program execution capability, including but not limited to a processor, a single-chip microcomputer, a digital signal processor (DSP), an application-specific integrated circuit (ASIC) and other devices.
- the processor may be a central processing unit (CPU), a field-programmable gate array (FPGA), or a tensor processing unit (TPU).
- Each module may include one or more chips in the above device.
- the virtual reality scene rendering device of the embodiments of the present application obtains the virtual reality scene and judges whether it is in a rendering idle state; if so, it performs image rendering on the virtual reality scene, generates display images, and stores the correspondence between the display images and display areas. It then obtains the target area of the virtual reality scene to be displayed and, according to the correspondence, retrieves and displays the target display image corresponding to the target area. By rendering images during the rendering idle state and retrieving them at display time, the device avoids the display refresh rate being limited by a low GPU rendering refresh rate during real-time GPU rendering; this raises the display refresh rate, reduces latency, and thereby alleviates the dizziness users experience with virtual reality devices. In addition, while the display refresh rate is improved, the fidelity of the picture is preserved, and retrieving pre-rendered display images reduces the power consumption of the device and extends its battery life.
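The render-ahead-then-retrieve flow summarized above amounts to a cache keyed by display area. The sketch below is a hedged illustration under assumed names (`RenderCache`, `prerender`, `display`); the patent does not prescribe this API, and the real-time fallback on a cache miss is an assumption added for completeness.

```python
class RenderCache:
    """Display images rendered during idle time, looked up at display time."""

    def __init__(self, render_fn):
        self._render = render_fn   # expensive render function: area -> image
        self._images = {}          # stored correspondence: area -> display image

    def prerender(self, areas):
        """Called while in the rendering idle state; skips completed areas."""
        for area in areas:
            if area not in self._images:
                self._images[area] = self._render(area)

    def display(self, target_area):
        """Retrieve the stored image for the target area; only fall back to a
        real-time render if the area was never pre-rendered."""
        if target_area not in self._images:
            self._images[target_area] = self._render(target_area)
        return self._images[target_area]
```

Because `display` is a dictionary lookup in the common case, the display refresh rate is decoupled from the render function's throughput, which is the effect the paragraph above describes.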
- the present application also proposes a virtual reality device, including the rendering device of the virtual reality scene as described in any one of the foregoing embodiments.
- the present application also proposes an electronic device including a processor and a memory, wherein the processor implements the virtual reality scene rendering method described in any of the foregoing embodiments by reading executable program code stored in the memory and running the program corresponding to that code.
- the present application also proposes a computer program product whose instructions, when executed by a processor, implement the virtual reality scene rendering method described in any of the foregoing embodiments.
- the present application also proposes a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the rendering method of the virtual reality scene described in any of the foregoing embodiments.
- FIG. 10 shows a block diagram of an exemplary electronic device suitable for implementing embodiments of the present application.
- the electronic device 12 shown in FIG. 10 is only an example, and should not bring any limitation to the functions and use scope of the embodiments of the present application.
- the electronic device 12 is represented in the form of a general-purpose computing device.
- the components of the electronic device 12 may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 connecting different system components (including the system memory 28 and the processing unit 16).
- the bus 18 represents one or more of several types of bus structures, including a memory bus or a memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures.
- these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
- the electronic device 12 typically includes a variety of computer system readable media. These media may be any available media that can be accessed by the electronic device 12, including volatile and non-volatile media, removable and non-removable media.
- the memory 28 may include a computer system readable medium in the form of volatile memory, such as a random access memory (RAM) 30 and/or a cache memory 32.
- the electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media.
- the storage system 34 may be used to read and write non-removable, non-volatile magnetic media (not shown in FIG. 10, commonly referred to as a "hard drive").
- although not shown in FIG. 10, a disk drive for reading from and writing to a removable non-volatile magnetic disk (for example, a "floppy disk") and an optical disc drive for reading from and writing to a removable non-volatile optical disc (for example, a Compact Disc Read-Only Memory (CD-ROM), a Digital Video Disc Read-Only Memory (DVD-ROM), or other optical media) may also be provided.
- each drive may be connected to the bus 18 through one or more data medium interfaces.
- the memory 28 may include at least one program product having a set of (eg, at least one) program modules configured to perform the functions of the embodiments of the present application.
- a program/utility tool 40 having a set of (at least one) program modules 42 may be stored in, for example, the memory 28.
- Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment.
- the program module 42 generally performs the functions and/or methods in the embodiments described in this application.
- the electronic device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, or the display 24), with one or more devices that enable a user to interact with the electronic device 12, and/or with any device (e.g., a network card or a modem) that enables the electronic device 12 to communicate with one or more other computing devices.
- This communication can be performed through an input/output (I/O) interface 22.
- the electronic device 12 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 20.
- the network adapter 20 communicates with the other modules of the electronic device 12 via the bus 18. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 12, including but not limited to microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
- the processing unit 16 executes various functional applications and data processing by running a program stored in the system memory 28, for example, to implement the method mentioned in the foregoing embodiment.
- the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of technical features referred to.
- a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature.
- "plurality" means at least two, such as two or three, unless specifically defined otherwise.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Human Computer Interaction (AREA)
- Geometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
Claims (15)
- A method for rendering a virtual reality scene, characterized by comprising the following steps: obtaining a virtual reality scene and judging whether a rendering idle state exists; if so, performing image rendering on the virtual reality scene, generating display images, and storing a correspondence between the display images and display areas; and obtaining a target area of the virtual reality scene to be displayed, and retrieving and displaying, according to the correspondence, the target display image corresponding to the target area.
- The method for rendering a virtual reality scene according to claim 1, characterized in that judging whether a rendering idle state exists comprises: judging whether a scene initialization state exists and, if so, determining that the rendering idle state exists; and performing image rendering on the virtual reality scene comprises: obtaining a gaze area in the virtual reality scene; and performing image rendering on the gaze area.
- The method for rendering a virtual reality scene according to claim 2, characterized by further comprising: obtaining a non-gaze area in the virtual reality scene; and performing image rendering on the non-gaze area in a preset order.
- The method for rendering a virtual reality scene according to any one of claims 1-3, characterized in that judging whether a rendering idle state exists comprises: judging whether a scene display state exists and image rendering of the gaze area has been completed and, if so, determining that the rendering idle state exists; and rendering the virtual reality scene comprises: obtaining a non-gaze area in the virtual reality scene; and performing image rendering on the non-gaze area in a preset order.
- The method for rendering a virtual reality scene according to claim 3, characterized by further comprising: when the position of the gaze area changes, interrupting the image rendering operation and obtaining the changed gaze area; and performing image rendering on the changed gaze area.
- The method for rendering a virtual reality scene according to any one of claims 1-5, characterized in that rendering the virtual reality scene comprises: obtaining an area to be rendered and judging whether image rendering of the area to be rendered has already been completed; if so, determining a next area to be rendered according to the area to be rendered, and performing image rendering on the next area to be rendered.
- The method for rendering a virtual reality scene according to any one of claims 1-6, characterized in that performing image rendering on the virtual reality scene, generating display images, and storing the correspondence between the display images and display areas comprises: establishing a coordinate system according to the virtual reality scene, and dividing the virtual reality scene into a plurality of display areas according to the coordinate system; performing image rendering on the display areas to generate display images; and storing the display images and the correspondence between the display images and the display areas.
- An apparatus for rendering a virtual reality scene, characterized by comprising: a judgment module configured to obtain a virtual reality scene and judge whether a rendering idle state exists; a processing module configured to, upon learning that the rendering idle state exists, perform image rendering on the virtual reality scene, generate display images, and store a correspondence between the display images and display areas; and a display module configured to obtain a target area of the virtual reality scene to be displayed, and to retrieve and display, according to the correspondence, the target display image corresponding to the target area.
- The apparatus for rendering a virtual reality scene according to claim 8, characterized in that the judgment module is specifically configured to: judge whether a scene initialization state exists and, if so, determine that the rendering idle state exists; and the processing module is specifically configured to: obtain a gaze area in the virtual reality scene; and perform image rendering on the gaze area.
- The apparatus for rendering a virtual reality scene according to claim 9, characterized in that the processing module is specifically configured to: obtain a non-gaze area in the virtual reality scene; and perform image rendering on the non-gaze area in a preset order.
- The apparatus for rendering a virtual reality scene according to any one of claims 8-10, characterized in that the judgment module is specifically configured to: judge whether a scene display state exists and image rendering of the gaze area has been completed and, if so, determine that the rendering idle state exists; and the processing module is specifically configured to: obtain a non-gaze area in the virtual reality scene; and perform image rendering on the non-gaze area in a preset order.
- The apparatus for rendering a virtual reality scene according to claim 10, characterized by further comprising: an interrupt module configured to, when the position of the gaze area changes, interrupt the image rendering operation and obtain the changed gaze area; and perform image rendering on the changed gaze area.
- The apparatus for rendering a virtual reality scene according to any one of claims 8-12, characterized in that the processing module is further configured to: obtain an area to be rendered and judge whether image rendering of the area to be rendered has already been completed; if so, determine a next area to be rendered according to the area to be rendered, and perform image rendering on the next area to be rendered.
- A virtual reality device, characterized by comprising the apparatus for rendering a virtual reality scene according to any one of claims 8-13.
- A computer-readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the program implements the method for rendering a virtual reality scene according to any one of claims 1-7.
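The partition step recited in claim 7 (establish a coordinate system over the scene and divide it into a plurality of display areas) can be illustrated with a simple grid split. The grid granularity and the `(col, row)` keying below are assumptions made for illustration; the claim does not fix how the coordinate system divides the scene.

```python
def divide_into_display_areas(scene_width, scene_height, cols, rows):
    """Lay a coordinate system over the scene and split it into a grid of
    display areas. Returns {(col, row): (x, y, width, height)} rectangles
    that together cover the full scene."""
    w = scene_width / cols
    h = scene_height / rows
    return {
        (c, r): (c * w, r * h, w, h)
        for c in range(cols)
        for r in range(rows)
    }
```

Each rectangle key can then serve as the display-area side of the stored image-to-area correspondence, so that the display module can look up the pre-rendered image for whichever area the target region falls in.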
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/764,401 US11263803B2 (en) | 2019-01-02 | 2019-12-12 | Virtual reality scene rendering method, apparatus and device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910001295.7 | 2019-01-02 | ||
CN201910001295.7A CN109741463B (zh) | 2019-01-02 | 2019-01-02 | 虚拟现实场景的渲染方法、装置及设备 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020140720A1 true WO2020140720A1 (zh) | 2020-07-09 |
Family
ID=66363116
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/124860 WO2020140720A1 (zh) | 2019-01-02 | 2019-12-12 | 虚拟现实场景的渲染方法、装置及设备 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11263803B2 (zh) |
CN (1) | CN109741463B (zh) |
WO (1) | WO2020140720A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114845147A (zh) * | 2022-04-29 | 2022-08-02 | 北京奇艺世纪科技有限公司 | 屏幕渲染方法、显示画面合成方法装置及智能终端 |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109741463B (zh) * | 2019-01-02 | 2022-07-19 | 京东方科技集团股份有限公司 | 虚拟现实场景的渲染方法、装置及设备 |
CN110351480B (zh) * | 2019-06-13 | 2021-01-15 | 歌尔光学科技有限公司 | 用于电子设备的图像处理方法、装置及电子设备 |
CN110930307B (zh) * | 2019-10-31 | 2022-07-08 | 江苏视博云信息技术有限公司 | 图像处理方法和装置 |
CN111402263B (zh) * | 2020-03-04 | 2023-08-29 | 南方电网科学研究院有限责任公司 | 高分大屏可视化优化方法 |
CN111381967A (zh) * | 2020-03-09 | 2020-07-07 | 中国联合网络通信集团有限公司 | 虚拟对象的处理方法及装置 |
CN111367414B (zh) * | 2020-03-10 | 2020-10-13 | 厦门络航信息技术有限公司 | 虚拟现实对象控制方法、装置、虚拟现实系统及设备 |
CN111429333A (zh) * | 2020-03-25 | 2020-07-17 | 京东方科技集团股份有限公司 | 一种gpu动态调频的方法、装置及系统 |
US11721052B2 (en) * | 2020-09-24 | 2023-08-08 | Nuvolo Technologies Corporation | Floorplan image tiles |
US12014030B2 (en) | 2021-08-18 | 2024-06-18 | Bank Of America Corporation | System for predictive virtual scenario presentation |
CN114125301B (zh) * | 2021-11-29 | 2023-09-19 | 卡莱特云科技股份有限公司 | 一种虚拟现实技术拍摄延迟处理方法及装置 |
US12106488B2 (en) * | 2022-05-09 | 2024-10-01 | Qualcomm Incorporated | Camera frame extrapolation for video pass-through |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6791549B2 (en) * | 2001-12-21 | 2004-09-14 | Vrcontext S.A. | Systems and methods for simulating frames of complex virtual environments |
CN105892683A (zh) * | 2016-04-29 | 2016-08-24 | 上海乐相科技有限公司 | 一种显示方法及目标设备 |
CN106652004A (zh) * | 2015-10-30 | 2017-05-10 | 北京锤子数码科技有限公司 | 基于头戴式可视设备对虚拟现实进行渲染的方法及装置 |
CN107274472A (zh) * | 2017-06-16 | 2017-10-20 | 福州瑞芯微电子股份有限公司 | 一种提高vr播放帧率的方法和装置 |
CN109741463A (zh) * | 2019-01-02 | 2019-05-10 | 京东方科技集团股份有限公司 | 虚拟现实场景的渲染方法、装置及设备 |
Family Cites Families (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2003039698A1 (en) * | 2001-11-02 | 2003-05-15 | Atlantis Cyberspace, Inc. | Virtual reality game system with pseudo 3d display driver & mission control |
US20110273466A1 (en) * | 2010-05-10 | 2011-11-10 | Canon Kabushiki Kaisha | View-dependent rendering system with intuitive mixed reality |
CN103164541B (zh) * | 2013-04-15 | 2017-04-12 | 北京世界星辉科技有限责任公司 | 图片呈现方法及设备 |
EP3301585A3 (en) * | 2013-08-02 | 2018-07-25 | Huawei Technologies Co., Ltd. | Image display method and apparatus |
EP3129958B1 (en) * | 2014-04-05 | 2021-06-02 | Sony Interactive Entertainment LLC | Method for efficient re-rendering objects to vary viewports and under varying rendering and rasterization parameters |
US10162412B2 (en) * | 2015-03-27 | 2018-12-25 | Seiko Epson Corporation | Display, control method of display, and program |
US10089790B2 (en) * | 2015-06-30 | 2018-10-02 | Ariadne's Thread (Usa), Inc. | Predictive virtual reality display system with post rendering correction |
CN107850981B (zh) * | 2015-08-04 | 2021-06-01 | 株式会社和冠 | 用户通知方法、手写数据取入装置及存储介质 |
CN105472207A (zh) * | 2015-11-19 | 2016-04-06 | 中央电视台 | 一种视音频文件渲染方法及装置 |
CN105976424A (zh) * | 2015-12-04 | 2016-09-28 | 乐视致新电子科技(天津)有限公司 | 一种图像渲染处理的方法及装置 |
US10218968B2 (en) * | 2016-03-05 | 2019-02-26 | Maximilian Ralph Peter von und zu Liechtenstein | Gaze-contingent display technique |
CN106484116B (zh) * | 2016-10-19 | 2019-01-08 | 腾讯科技(深圳)有限公司 | 媒体文件的处理方法和装置 |
US10255714B2 (en) * | 2016-08-24 | 2019-04-09 | Disney Enterprises, Inc. | System and method of gaze predictive rendering of a focal area of an animation |
CN106331823B (zh) * | 2016-08-31 | 2019-08-20 | 北京奇艺世纪科技有限公司 | 一种视频播放方法及装置 |
GB2553353B (en) * | 2016-09-05 | 2021-11-24 | Advanced Risc Mach Ltd | Graphics processing systems and graphics processors |
KR102719606B1 (ko) * | 2016-09-09 | 2024-10-21 | 삼성전자주식회사 | 이미지 표시 방법, 저장 매체 및 전자 장치 |
CN106485790A (zh) * | 2016-09-30 | 2017-03-08 | 珠海市魅族科技有限公司 | 一种画面显示的方法以及装置 |
CN106600515A (zh) * | 2016-11-07 | 2017-04-26 | 深圳市金立通信设备有限公司 | 一种虚拟现实低延迟处理方法及终端 |
US10735691B2 (en) * | 2016-11-08 | 2020-08-04 | Rockwell Automation Technologies, Inc. | Virtual reality and augmented reality for industrial automation |
CN116109467A (zh) * | 2016-11-16 | 2023-05-12 | 奇跃公司 | 具有降低功率渲染的混合现实系统 |
CN106502427B (zh) * | 2016-12-15 | 2023-12-01 | 北京国承万通信息科技有限公司 | 虚拟现实系统及其场景呈现方法 |
CN106919360B (zh) * | 2017-04-18 | 2020-04-14 | 珠海全志科技股份有限公司 | 一种头部姿态补偿方法及装置 |
US10109039B1 (en) * | 2017-04-24 | 2018-10-23 | Intel Corporation | Display engine surface blending and adaptive texel to pixel ratio sample rate system, apparatus and method |
CN107145235A (zh) * | 2017-05-11 | 2017-09-08 | 杭州幻行科技有限公司 | 一种虚拟现实系统 |
CN107317987B (zh) * | 2017-08-14 | 2020-07-03 | 歌尔股份有限公司 | 虚拟现实的显示数据压缩方法和设备、系统 |
CN107516335A (zh) * | 2017-08-14 | 2017-12-26 | 歌尔股份有限公司 | 虚拟现实的图形渲染方法和装置 |
CN107562212A (zh) * | 2017-10-20 | 2018-01-09 | 网易(杭州)网络有限公司 | 虚拟现实场景防抖方法、装置、存储介质及头戴显示设备 |
US10949947B2 (en) * | 2017-12-29 | 2021-03-16 | Intel Corporation | Foveated image rendering for head-mounted display devices |
CN109032350B (zh) * | 2018-07-10 | 2021-06-29 | 深圳市创凯智能股份有限公司 | 眩晕感减轻方法、虚拟现实设备及计算机可读存储介质 |
CN109242943B (zh) * | 2018-08-21 | 2023-03-21 | 腾讯科技(深圳)有限公司 | 一种图像渲染方法、装置及图像处理设备、存储介质 |
US20200110271A1 (en) * | 2018-10-04 | 2020-04-09 | Board Of Trustees Of Michigan State University | Photosensor Oculography Eye Tracking For Virtual Reality Systems |
WO2020112561A1 (en) * | 2018-11-30 | 2020-06-04 | Magic Leap, Inc. | Multi-modal hand location and orientation for avatar movement |
- 2019
- 2019-01-02 CN CN201910001295.7A patent/CN109741463B/zh active Active
- 2019-12-12 WO PCT/CN2019/124860 patent/WO2020140720A1/zh active Application Filing
- 2019-12-12 US US16/764,401 patent/US11263803B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6791549B2 (en) * | 2001-12-21 | 2004-09-14 | Vrcontext S.A. | Systems and methods for simulating frames of complex virtual environments |
CN106652004A (zh) * | 2015-10-30 | 2017-05-10 | 北京锤子数码科技有限公司 | 基于头戴式可视设备对虚拟现实进行渲染的方法及装置 |
CN105892683A (zh) * | 2016-04-29 | 2016-08-24 | 上海乐相科技有限公司 | 一种显示方法及目标设备 |
CN107274472A (zh) * | 2017-06-16 | 2017-10-20 | 福州瑞芯微电子股份有限公司 | 一种提高vr播放帧率的方法和装置 |
CN109741463A (zh) * | 2019-01-02 | 2019-05-10 | 京东方科技集团股份有限公司 | 虚拟现实场景的渲染方法、装置及设备 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114845147A (zh) * | 2022-04-29 | 2022-08-02 | 北京奇艺世纪科技有限公司 | 屏幕渲染方法、显示画面合成方法装置及智能终端 |
CN114845147B (zh) * | 2022-04-29 | 2024-01-16 | 北京奇艺世纪科技有限公司 | 屏幕渲染方法、显示画面合成方法装置及智能终端 |
Also Published As
Publication number | Publication date |
---|---|
CN109741463B (zh) | 2022-07-19 |
US20210225064A1 (en) | 2021-07-22 |
US11263803B2 (en) | 2022-03-01 |
CN109741463A (zh) | 2019-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020140720A1 (zh) | 虚拟现实场景的渲染方法、装置及设备 | |
CN112042182B (zh) | 通过面部表情操纵远程化身 | |
US11954805B2 (en) | Occlusion of virtual objects in augmented reality by physical objects | |
EP3786878A1 (en) | Image resolution processing method, system and apparatus, and storage medium and device | |
US9325960B2 (en) | Maintenance of three dimensional stereoscopic effect through compensation for parallax setting | |
JP2011510407A (ja) | グラフィックス処理システムにおけるオフスクリーンサーフェスのためのマルチバッファサポート | |
JP4234089B2 (ja) | エンタテインメント装置、オブジェクト表示装置、オブジェクト表示方法、プログラム、およびキャラクタ表示方法 | |
JP2008077372A (ja) | 画像処理装置、画像処理装置の制御方法及びプログラム | |
WO2022121653A1 (zh) | 确定透明度的方法、装置、电子设备和存储介质 | |
JP4031509B1 (ja) | 画像処理装置、画像処理装置の制御方法及びプログラム | |
JPH10295934A (ja) | ビデオゲーム装置及びモデルのテクスチャの変化方法 | |
JP3231029B2 (ja) | レンダリング方法及び装置、ゲーム装置、並びに立体モデルをレンダリングするためのプログラムを格納したコンピュータ読み取り可能な記録媒体 | |
TWI566205B (zh) | 圖形驅動程式在顯像圖框中近似動態模糊的方法 | |
US11551383B2 (en) | Image generating apparatus, image generating method, and program for generating an image using pixel values stored in advance | |
WO2023000547A1 (zh) | 图像处理方法、装置和计算机可读存储介质 | |
US11640688B2 (en) | Apparatus and method for data generation | |
JP5063022B2 (ja) | プログラム、情報記憶媒体及び画像生成システム | |
WO2022121654A1 (zh) | 确定透明度的方法、装置、电子设备及存储介质 | |
US11966278B2 (en) | System and method for logging visible errors in a videogame | |
TWI817832B (zh) | 頭戴顯示裝置、其控制方法及非暫態電腦可讀取儲存媒體 | |
CN115426505B (zh) | 基于面部捕捉的预设表情特效触发方法及相关设备 | |
EP4451101A1 (en) | Pick operand detection method and apparatus, computer device, readable storage medium, and computer program product | |
WO2022121652A1 (zh) | 确定透明度的方法、装置、电子设备及存储介质 | |
US20230158403A1 (en) | Graphics data processing | |
WO2023158375A2 (zh) | 表情包生成方法及设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19906868 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19906868 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 24.08.2021) |