
WO2018120657A1 - Method and device for sharing virtual reality data - Google Patents


Info

Publication number
WO2018120657A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
data
environment
image
virtual reality
Prior art date
Application number
PCT/CN2017/087725
Other languages
French (fr)
Chinese (zh)
Inventor
商泽利
周胜丰
陈浩
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to CN201780005621.0A priority Critical patent/CN108431872A/en
Publication of WO2018120657A1 publication Critical patent/WO2018120657A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof

Definitions

  • the present application relates to the field of communication technologies, and in particular, to virtual reality technology.
  • Virtual Reality (VR) technology refers to the use of computer simulation to generate a virtual world in three-dimensional space, providing users with simulated visual, tactile, and other sensory experiences that allow them to feel immersed and to observe objects in the three-dimensional space freely and in real time.
  • the present application provides a method and device for sharing virtual reality data, so as to realize sharing of virtual reality data of the same scene among multiple users, so that multiple users in different places can experience the effect of being in the same scene.
  • the present application provides a method for sharing virtual reality data, in which the first terminal collects environment data of the environment in which it is currently located, where the environment data includes at least an environment image of the environment in which the first terminal is located;
  • if the environment image is a two-dimensional image, the environment image in the environment data is converted from a two-dimensional image into a three-dimensional image, thereby converting the environment data into virtual reality data reflecting the three-dimensional scene in which the first terminal is located. After the virtual reality data is transmitted to the at least one second terminal, the user of the second terminal can, by watching the virtual reality data output by the second terminal, obtain the visual experience of being in the environment in which the first terminal is located, thus experiencing the same environment scene as the first terminal.
  • converting the environment image in the environment data from a two-dimensional image into a three-dimensional image may be done by creating depth information for the environment image in the environment data, and then using the depth information and the environment image to construct a three-dimensional image corresponding to the environment image, so that the environment data is converted into virtual reality data by replacing the two-dimensional environment image in the environment data with the three-dimensional environment image.
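The depth-based conversion described above can be illustrated with a small depth-image-based rendering (DIBR) sketch: given a 2D scanline and per-pixel depth, each pixel is shifted horizontally by half its disparity to synthesize a left/right stereo pair. The disparity formula, the baseline and focal-length values, and all function names below are illustrative assumptions, not taken from the patent.

```python
def disparity(depth, baseline=0.06, focal=500.0):
    """Disparity (in pixels) for a point at `depth` metres (assumed model)."""
    return baseline * focal / depth

def render_stereo_row(row, depths, baseline=0.06, focal=500.0):
    """Warp one scanline into (left, right) views; None marks holes
    where no source pixel landed (to be filled later)."""
    width = len(row)
    left = [None] * width
    right = [None] * width
    for x, (pixel, z) in enumerate(zip(row, depths)):
        half = int(round(disparity(z, baseline, focal) / 2))
        xl, xr = x + half, x - half  # shift the two views in opposite directions
        if 0 <= xl < width:
            left[xl] = pixel
        if 0 <= xr < width:
            right[xr] = pixel
    return left, right
```

A nearer point (smaller depth) receives a larger disparity and so shifts further, which is what gives the synthesized pair its stereoscopic effect.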
  • if the environment image is a three-dimensional image, the environment data may be directly determined as virtual reality data reflecting the three-dimensional scene of the environment in which the first terminal is located, and transmitted to the at least one second terminal, so that the data of the three-dimensional scene of the environment in which the first terminal is located is shared with the at least one second terminal and the user of the second terminal can experience it based on the virtual reality data.
  • the first terminal may further perform virtual reality scene rendering on the three-dimensional environment image in the virtual reality data, where the virtual reality scene rendering may include one or more of reverse distortion, anti-dispersion, and interpupillary adjustment.
  • the viewing angle of the user on the first terminal side may also be collected; correspondingly, while transmitting the virtual reality data to the at least one second terminal, the first terminal may further send the viewing angle to the at least one second terminal, so that the second terminal renders the virtual reality data according to the viewing angle and outputs the virtual reality data presented at that viewing angle. In this way, the user of the second terminal can view the environment scene in which the first terminal is located from the same perspective as the user of the first terminal.
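One possible way to send the viewing angle alongside the virtual reality data, sketched under the assumption of a simple JSON message format (the field names are placeholders, not the patent's protocol):

```python
import json

def pack_shared_frame(frame_id, vr_payload, yaw_deg, pitch_deg):
    """Bundle a frame of VR data with the sender's viewing angle."""
    return json.dumps({
        "frame_id": frame_id,
        "payload": vr_payload,                 # e.g. an encoded 3D frame
        "view": {"yaw": yaw_deg, "pitch": pitch_deg},
    })

def unpack_shared_frame(message):
    """Receiver side: recover the payload and the angle to render it at."""
    msg = json.loads(message)
    return msg["payload"], msg["view"]["yaw"], msg["view"]["pitch"]
```

The second terminal would then feed the recovered yaw/pitch to its renderer so the frame is presented from the first user's perspective.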
  • the first terminal may further encode the virtual reality data before the first terminal transmits the virtual reality data to the at least one second terminal.
  • the present application also provides a terminal having the function of implementing the behavior of the first terminal in the above method.
  • the functions may be implemented by hardware or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the functions described above.
  • the terminal includes an image acquisition device, a data interface, a processor, and a communication module.
  • the image collection device is configured to collect an environment image of the environment in which the terminal is currently located; the data interface is configured to obtain environment data of the terminal's current environment, where the environment data includes at least the environment image collected by the image collection device;
  • the processor is configured to support the terminal in performing the corresponding functions in the above method; for example, if the environment image is a two-dimensional image, the processor may be configured to convert the environment image in the environment data from a two-dimensional image into a three-dimensional image and obtain the virtual reality data converted from the environment data, where the virtual reality data is used to reflect the three-dimensional scene of the environment in which the terminal is located; the communication module is configured to transmit the virtual reality data to at least one receiving terminal.
  • the processor is further configured to determine, if the environment image is a three-dimensional image, the environment data as the virtual reality data for reflecting the three-dimensional scene of the environment in which the terminal is located.
  • the processor is further configured to: after obtaining the virtual reality data, perform virtual reality scene rendering on the three-dimensional environment image in the virtual reality data, where the virtual reality scene rendering includes one or more of reverse distortion, anti-dispersion, and interpupillary adjustment.
  • the terminal may further include: a sensor, configured to sense the viewing angle at which the user on the terminal side views the environment;
  • the data interface is further used to obtain the viewing angle, collected by the sensor, at which the user on the terminal side views the environment;
  • the communication module is further configured to send the viewing angle to the at least one second terminal while transmitting the virtual reality data to the at least one second terminal, so that the second terminal renders the virtual reality data according to the viewing angle and outputs the virtual reality data presented at that viewing angle.
  • the processor is further configured to: before the communication module transmits the virtual reality data to the at least one receiving terminal, determine the network state of the terminal, determine an encoding mode based on the network state, and encode the virtual reality data accordingly.
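The "determine an encoding mode based on the network state" step could look like the following sketch; the bandwidth thresholds, codec names, and profile fields are illustrative assumptions only, not values given by the patent:

```python
def choose_encoding(bandwidth_mbps):
    """Map an estimated uplink bandwidth to an encoding profile.
    Higher bandwidth allows a higher-fidelity stream of the VR data."""
    if bandwidth_mbps >= 25:
        return {"codec": "H.265", "resolution": "4K", "bitrate_mbps": 20}
    if bandwidth_mbps >= 10:
        return {"codec": "H.265", "resolution": "1080p", "bitrate_mbps": 8}
    # fall back to a widely supported codec at a modest bitrate
    return {"codec": "H.264", "resolution": "720p", "bitrate_mbps": 4}
```

The terminal would re-run this selection whenever its measured network state changes, so the shared stream degrades gracefully on poor links.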
  • the second aspect of the embodiment of the present application is consistent with the design of the first aspect, and the technical means are similar; for the specific beneficial effects of the technical solution, refer to the first aspect, and details are not described herein again.
  • the present application further provides another method for sharing virtual reality data, in which the terminal acquires target data to be shared by the first terminal, where the target data includes at least one frame of image;
  • if the image in the target data is a two-dimensional image, the image in the target data is converted from the two-dimensional image into a three-dimensional image, and the virtual reality data converted from the target data is obtained; the virtual reality data reflects the three-dimensional scene constructed according to the target data.
  • the user of the second terminal can view the virtual reality data on the second terminal, thereby realizing sharing of the virtual reality data; moreover, through the virtual reality data, the user of the second terminal can experience the real environment scene corresponding to the image played on the first terminal side.
  • acquiring, by the terminal, the target data to be shared by the first terminal may be acquiring the environment data of the environment in which the first terminal is located, where the environment data includes an environment image of that environment.
  • acquiring, by the terminal, the target data to be shared by the first terminal may also be acquiring the video data currently being played by the first terminal, where the video data includes at least one frame of video image.
  • acquiring the video data currently played by the first terminal may be acquiring the video data currently being played in a target play window displayed in the first terminal (such as the game window of a game application, the video display window of a browser, or the play window of a player). After the virtual reality data converted from the video data played in the target window is sent to the second terminal, the user of the second terminal, by viewing the virtual reality data output by the second terminal, can feel as if in the three-dimensional environment scene corresponding to the target data in the first terminal (e.g., in the three-dimensional environment scene of the game output by the first terminal).
  • the present application also provides a terminal having the function of implementing the behavior of the terminal in the above method of the third aspect.
  • the functions may be implemented by hardware or by corresponding software implemented by hardware.
  • the hardware or software includes one or more modules corresponding to the functions described above.
  • the terminal includes a data interface, a processor, and a communication module, where the data interface is used to acquire target data to be shared by the first terminal, the target data including at least one frame of image;
  • the processor is configured to, if the image in the target data is a two-dimensional image, convert the image in the target data from a two-dimensional image into a three-dimensional image and obtain virtual reality data converted from the target data, where the virtual reality data is used to reflect the three-dimensional scene constructed according to the target data; the communication module is configured to transmit the virtual reality data to the at least one second terminal.
  • the data interface is specifically configured to obtain environment data of an environment in which the first terminal is currently located, where the environment data includes: an environment image of an environment in which the first terminal is located.
  • when acquiring the target data to be shared by the first terminal, the data interface is specifically configured to acquire the video data currently played by the first terminal, where the video data includes at least one frame of video image.
  • the data interface is specifically configured to acquire the video data currently playing in a target play window displayed in the first terminal, where the target play window is the image output window of a specified application in the first terminal.
  • FIG. 1 is a schematic structural diagram of a terminal of the present application.
  • FIG. 2 is a schematic flowchart of an embodiment of a method for sharing virtual reality data according to the present application.
  • FIG. 3 is a schematic flow chart of still another embodiment of a method for sharing virtual reality data according to the present application.
  • FIG. 4 is a schematic structural diagram of an embodiment of an apparatus for sharing virtual reality data according to the present application.
  • FIG. 5 is a schematic structural diagram of another embodiment of an apparatus for sharing virtual reality data according to the present application.
  • the method for sharing virtual reality data in the embodiment of the present application is applicable to sharing virtual reality data between different terminal devices.
  • the terminal may include, but is not limited to, a mobile phone, a mobile computer, a tablet computer, a personal digital assistant (PDA), a media player, a smart wearable device (e.g., smart glasses or a head-mounted smart device), and the like.
  • the terminal specifically has functions such as running an application (Application, APP) and accessing the network.
  • FIG. 1 is a schematic structural diagram of a part of a terminal 100 related to an embodiment of the present application.
  • the terminal 100 includes components such as a communication module 110, a memory 120, an input device 130, a display 140, a sensor 150, an audio circuit 160, an image capture device 170, and a processor 180.
  • the communication module 110, the memory 120, the input device 130, the display 140, the sensor 150, the audio circuit 160, the image acquisition device 170, and the processor 180 are connected by a communication bus 190.
  • the terminal structure shown in FIG. 1 does not constitute a limitation on the terminal; the terminal may include more or fewer components than those illustrated, combine some components, or use a different component layout.
  • the components of the terminal 100 will be specifically described below with reference to FIG. 1 :
  • the communication module 110 can be used for transmitting and receiving information, such as receiving and transmitting signals.
  • the communication module can be a radio frequency (RF) circuit, and the RF circuit can include, but is not limited to, an antenna, at least one amplifier, transceivers, couplers, low noise amplifiers (LNAs), and duplexers.
  • the RF circuit can communicate with the network and other devices through wireless communication.
  • the communication module may further include a data transmission module and a data receiving module to implement reception and transmission of image, audio or video data, such as a communication module, which may include a Bluetooth module, a WiFi module, and the like.
  • the memory 120 can be used to store software programs as well as modules.
  • the memory can also store data such as images, audio, and the like involved in the present application.
  • the memory 120 may mainly include a storage program area and a storage data area, where
  • the stored program area can store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage program area may store an image acquisition module for acquiring an image of the environment in which the first terminal is located; an image processing module for processing the environment image of the environment in which the first terminal is located; a data encoding module for encoding data; and a data transmission module for data transmission.
  • the storage data area may store data created according to the use of the terminal 100, such as audio data, image data, and the like.
  • memory 120 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
  • the input device 130 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal 100.
  • the input device 130 may include a touch panel and other input devices.
  • the touch panel, also known as a touch screen, can collect touch operations by the user on or near it (such as operations performed by the user with a finger, a stylus, or any other suitable object or accessory on or near the touch panel) and drive the corresponding connecting device according to a preset program.
  • other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • Display 140 (which may also be referred to as a display module) may be used to display information entered by the user or information provided to the user as well as various menus of terminal 100.
  • the display 140 may include a display panel.
  • the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • although the input device and the display are described here as two separate components implementing the input and output functions of the terminal 100, in some embodiments, such as in a mobile terminal like a mobile phone, the input device (e.g., the touch panel) is integrated with the display panel to implement the input and output functions of the phone.
  • Terminal 100 may also include at least one type of sensor 150, such as a light sensor, motion sensor, and other sensors.
  • in order to determine the user's current viewing angle, the terminal includes at least a sensor capable of sensing the body posture of the user.
  • the audio circuit 160 can be coupled to a speaker and a microphone to provide an audio interface between the user and the terminal 100.
  • the audio circuit 160 can transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data. The audio data is then output to the communication module 110 for transmission to another terminal, or output to the memory 120 for further processing.
  • the image collection device 170 is configured to collect image data of the environment in which the terminal is located, and transmit the collected image data to the processor for processing.
  • the image capture device 170 can be a camera. It can be understood that FIG. 1 shows an example in which the image collection device is provided in the terminal; in practical applications, the image acquisition unit may instead be disposed outside the terminal and connected to the terminal through a cable or a wireless network.
  • the image acquisition device, the audio circuit connected with the speaker and the microphone, and the sensor can serve as a data acquisition module for collecting image, audio, and similar data in the present application.
  • the processor 180 is the control center of the terminal 100, connecting the various parts of the entire terminal through various data interfaces and lines; by running or executing the software programs and/or modules stored in the memory 120 and recalling the data stored in the memory 120, it executes the various functions of the terminal 100 and processes data, thereby performing overall monitoring of the terminal.
  • the processor obtains the image data collected by the image acquisition unit through the data interface.
  • the data interface can also acquire the audio signal collected by the audio circuit and the sensing signal collected by the sensor, and transmit the audio signal and the sensing signal to the processor.
  • the processor may include a central processing unit (CPU), and may further include: a graphics processing unit (GPU).
  • the processor 180 is configured to: obtain, from the data interface, environment data of the current environment of the terminal, where the environment data includes at least the environment image collected by the image collection device; if the environment image is a two-dimensional image, convert the environment image in the environment data from a two-dimensional image into a three-dimensional image and obtain the virtual reality data converted from the environment data, where the virtual reality data is used to reflect the three-dimensional scene of the environment in which the terminal is located; and transmit the virtual reality data to the at least one receiving terminal through the communication module, the receiving terminal being a terminal for receiving the virtual reality data.
  • the processor can be used to implement related operations performed by the processor in the first terminal in the method for sharing virtual reality data provided by the embodiment shown in FIG. 2 and FIG. 3 described below.
  • the terminal 100 also includes a power source (such as a battery) for powering the various components. The power source can be logically connected to the processor 180 through a power management system, so that charging, discharging, power consumption, and other functions are managed through the power management system.
  • the terminal 100 may further include a radio frequency module, a Bluetooth module, and the like, and details are not described herein again.
  • the following takes as an example the case where the virtual reality data corresponding to the environment in which the first terminal is located is shared by the first terminal with the second terminal.
  • the first terminal may be understood as a transmitting terminal that needs to send or share virtual reality data; and the second terminal is a receiving terminal that receives virtual reality data.
  • FIG. 2 is a schematic flowchart diagram of an embodiment of a method for sharing virtual reality data according to the present application.
  • the method in this embodiment may include:
  • S201: The first terminal collects video data of the environment in which the first terminal is located.
  • the video data may be composed of consecutive multi-frame images.
  • each frame image of the environment in which the first terminal is located is referred to as an environment image.
  • an image of the environment in which the first terminal is currently located may be collected by an image capturing device of the first terminal, such as a camera or an external camera connected to the first terminal.
  • the image capturing device in the embodiment of the present application may be an image capturing device such as a camera that collects two-dimensional (2D) images.
  • the collected video data may be a 2D image including multiple frames.
  • the 2D image may also be referred to as a planar image, and refers to an image having two dimensions, such as an image that can be represented by two dimensions (X, Y).
  • the image capturing device may also be an image capturing device such as a camera that collects three-dimensional (3D) images.
  • the 3D image may also be referred to as a stereoscopic image, and refers to an image having three dimensions, for example, an image that can be represented by three dimensions (X, Y, Z).
  • the image capturing device may be a VR camera or the like.
  • the collected video data may be a 3D image including multiple frames.
  • optionally, while the image acquisition device of the first terminal collects the image of the environment in which the first terminal is located, the audio signal of that environment may be collected through an audio circuit; for example, a microphone connected to the audio circuit collects the audio signal of the environment in which the first terminal is located, so that the audio circuit can obtain the audio signal.
  • while acquiring the video data, the first terminal may also acquire user posture data of the user on the first terminal side, where the user posture may include the viewing angle at which the user views the current environment.
  • for example, the user posture can be the elevation angle of the user's head, and the like.
  • the video image and the audio data may constitute the environment data of the environment on the first terminal side.
  • S202: The first terminal detects whether the currently collected environment image is a 3D image; if yes, the environment image is used as the virtual reality data of the environment in which the first terminal is currently located, and step S206 is performed; if not, step S203 is performed.
  • specifically, the processor of the first terminal may detect whether the environment image includes depth information: if the depth information is not included, the environment image is a 2D image; if the environment image includes depth information, the environment image is a 3D image.
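The detection step can be sketched as follows, assuming (purely for illustration) that a frame is represented as a dictionary whose depth entry is absent for 2D images:

```python
def is_3d(frame):
    """A frame counts as a 3D image only when it carries depth information."""
    return frame.get("depth") is not None

def to_virtual_reality(frame, estimate_depth):
    """If the frame is 2D, construct depth for it (step S203 onward);
    if it is already 3D, pass it through unchanged (step S206)."""
    if not is_3d(frame):
        frame = dict(frame, depth=estimate_depth(frame["image"]))
    return frame
```

Here `estimate_depth` stands in for whatever depth-construction method the terminal uses; it is a hypothetical placeholder, not an API from the patent.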
  • the difference between the 2D image and the 3D image is that the 2D image lacks depth information.
  • taking the straight line that passes through the lens center and is perpendicular to the imaging plane as the Z axis, if the coordinates of an imaged object in the camera coordinate system are (X, Y, Z), then the Z value of the object is its depth relative to the imaging plane of the camera; this depth information is not included in a 2D image of the object, but is included in a 3D image of the object.
  • in this embodiment, the first terminal collects one frame of the environment image at a time, and each time the first terminal collects a frame of the environment image, it needs to detect whether that environment image is a 3D image; if the environment image is not a 3D image, the operation of converting the 2D environment image into a 3D image is performed.
  • considering that the image capturing device such as the camera is generally not switched while the video data is being collected, only the first frame of the environment image in the video data may be detected, and the dimension of each frame of environment image included in the video data determined according to the dimension of that first frame.
  • the present application takes environment data consisting only of the environment image as an example; the collected environment data may include audio signals and the like in addition to the environment image. Therefore, when the environment image included in the environment data is a 3D image, the environment data may be used as the virtual reality data of the environment in which the first terminal is located.
  • S203: The first terminal determines the depth information required to convert the environment image from a 2D image into a 3D image.
  • the processor of the first terminal constructs depth information of the environment image to subsequently convert the environment image from the 2D image to the 3D image according to the depth information constructed for the environment image.
  • for example, the first terminal can construct depth information for the 2D environment image using a feature learning method, such as a relative-depth or depth-ordering approach; the manner in which the depth information required to construct the 2D environment image into a 3D image is obtained is not limited in the present application.
  • optionally, if a moving object exists in the environment image, the depth information corresponding to the environment image may be optimized by combining the motion information of the moving object, the environment image, and the latest frame of environment image before it. Whether a moving object exists in the environment image may be determined by analyzing the environment image together with one or more adjacent frames of environment image, for example by performing image recognition on the adjacent frames and determining whether the photographed subject has a positional offset.
  • S204: The first terminal constructs a 3D environment image corresponding to the 2D environment image according to the depth information constructed for the 2D environment image and the 2D environment image itself.
  • a 3D image constructed from an environment image is referred to as a 3D environment image.
  • the processor of the first terminal synthesizes the depth information with the environment image, and converts the 2D environment image into a 3D environment image.
  • the 3D environment image constructed from the environment image reflects the three-dimensional scene of the environment in which the first terminal was located when the environment image was collected; therefore, the 3D environment image is effectively the virtual reality data of the environment in which the first terminal is currently located, and can reflect the three-dimensional scene of that environment.
  • the above directly takes the processing of the environment image as an example, but it can be understood that if the first terminal collects environment data including the environment image, the 2D environment image in the environment data can be converted into a 3D environment image, thereby obtaining the virtual reality data converted from the environment data.
  • steps S203 and S204 are one implementation of converting the environment image from a 2D image into a 3D image; if the environment image is converted from a 2D image into a 3D image by other means, that is equally applicable to this embodiment of the present application, and details are not repeated here.
  • S205: The first terminal performs hole filling on the constructed 3D environment image.
  • the cavity filling is also called void filling.
  • the basic idea of hole filling is spatial correlation: the neighborhood pixels around a hole in the image are used to estimate the depth values of the pixels inside the hole, and the hole is then filled based on the estimated depth values.
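A minimal one-dimensional sketch of hole filling by spatial correlation, as described above (the patent does not fix a specific algorithm; this version simply averages the nearest valid neighbours, with `None` marking hole pixels):

```python
def fill_holes(depth_row):
    """Replace None entries with the mean of the nearest valid neighbours."""
    filled = list(depth_row)
    for i, v in enumerate(filled):
        if v is None:
            # nearest valid neighbour to the left (may already be filled)
            left = next((filled[j] for j in range(i - 1, -1, -1)
                         if filled[j] is not None), None)
            # nearest originally valid neighbour to the right
            right = next((depth_row[j] for j in range(i + 1, len(depth_row))
                          if depth_row[j] is not None), None)
            neighbours = [n for n in (left, right) if n is not None]
            filled[i] = sum(neighbours) / len(neighbours) if neighbours else 0.0
    return filled
```

A real implementation would work on 2D neighbourhoods and could weight neighbours by distance, but the spatial-correlation idea is the same.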
  • the processor of the first terminal may perform optimization processing such as hole filling on the converted 3D environment image.
  • this step is only for optimizing the 3D effect of the image.
  • the step S205 may not be performed.
  • S206: The first terminal performs VR scene rendering on the 3D environment image to obtain a rendered 3D environment image.
  • the purpose of the VR scene rendering by the processor on the constructed 3D environment image or the acquired 3D environment image is to avoid an abnormal situation such as distortion when the second terminal displays the 3D environment image.
  • because the image on the display screen of the terminal is magnified through a lens, the image needs to be distorted in advance to offset that distortion, so that the image displayed by the terminal is projected onto the user's retina without deformation.
  • the processing that cancels the distortion of the image output by the terminal display screen is anti-distortion, also called reverse distortion; anti-distortion is one processing method of VR scene rendering.
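Anti-distortion is commonly realised with an inverse radial model; the single-coefficient sketch below (the coefficient value and names are illustrative assumptions, not the patent's parameters) maps each output pixel back to the source pixel that should be sampled, so the lens's magnification distortion is cancelled:

```python
def predistort(x, y, k1=-0.2):
    """Map an output pixel (normalised coordinates, centred at 0)
    to the source pixel to sample; k1 < 0 pre-applies barrel distortion
    to cancel the lens's pincushion distortion."""
    r2 = x * x + y * y           # squared radius from the image centre
    scale = 1 + k1 * r2          # inverse radial term
    return x * scale, y * scale
```

Pixels near the centre are left almost untouched, while pixels near the edge are pulled inward, which is exactly where lens distortion is strongest.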
  • The terminal also exhibits dispersion (color separation) in the process of outputting images through a lens. To counter this, the principle of the reversibility of the optical path can be used to reverse the dispersion: since the light projected from the image is dispersed when passing through the lens, the image is inversely dispersed once before its light enters the lens, so that the image presented through the lens is a normal image.
  • When the eyes of a person with normal vision look at the same object, the object is imaged on the retinas of both eyes, and the two images are fused in the visual center of the brain into a complete, three-dimensional single object; this function is called binocular single vision. The principle by which VR glasses present VR images is similar to that of our eyes. VR glasses generally split the image content into two halves and superimpose the images through lenses, which often leads to the pupil center of the human eye, the center of the lens, and the center of the (split) screen not lying on a straight line. This makes the visual effect very poor, with problems such as blurring and deformation. The ideal state is that the pupil center of the eye, the center of the lens, and the center of the screen (after splitting) lie on a straight line, and the processing that adjusts the 3D environment image toward this state is called interpupillary-distance adjustment.
  • That is, the VR scene rendering may include performing one or more kinds of processing, such as anti-distortion, anti-dispersion, and interpupillary-distance adjustment, on the 3D environment image.
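The anti-distortion step mentioned above is commonly realized as a radial pre-warp of each pixel about the lens centre, so that the lens's own pincushion distortion cancels it out. A minimal sketch using a Brown-Conrady-style polynomial model; the coefficients k1 and k2 are hypothetical, not values from the application:

```python
def predistort_point(x, y, k1=-0.25, k2=0.05):
    """Radially pre-warp a normalized point (x, y) about the lens centre (0, 0).

    r' = r * (1 + k1*r^2 + k2*r^4). A negative k1 produces a barrel
    warp that offsets the headset lens's pincushion distortion.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```

Applied over the whole frame (per colour channel, with slightly different coefficients per channel, to also approximate anti-dispersion), points far from the lens centre are pulled inward while the centre stays fixed.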
  • In summary, for the collected environment data (which may be called the first environment data for convenience of distinction), if the environment image it contains is two-dimensional, the first terminal adjusts parameter information of the environment data (e.g., adding depth information, performing hole filling), converts the environment data into 3D environment data (i.e., the second environment data), and performs VR scene rendering on the 3D environment data to generate VR-rendered 3D environment data (which may be called the third environment data, or third data). If the environment data collected by the data collection module (i.e., the first environment data) is already three-dimensional, the first terminal only needs to perform VR scene rendering on the environment data to generate fourth data; that is, the VR scene rendering may be performed directly on the three-dimensional environment image in the virtual reality data.
  • the first terminal determines, according to a current network state between the first terminal and the second terminal, a coding mode applicable to the network state.
  • the network status may reflect the data transmission effect between the first terminal and the second terminal.
  • the network status may include network speed, network signal quality, and the like.
  • Coding modes can be divided into two categories: lossy coding and lossless coding, and each category can be further divided into multiple coding methods. Lossy coding methods include Audio Video Interleaved (AVI) encoding, Moving Picture Experts Group 4 (MPEG-4) encoding, and the like; lossless coding methods include Shannon coding, Huffman coding, Run-Length Coding (RLC), and the like. According to the network speed, the required category of coding mode can be determined, and then a specific coding method within that category is used for encoding.
  • For example, if the network state indicates that the network transmission speed between the first terminal and the second terminal is high, the processor of the first terminal may use a lossless coding mode as the required coding mode, since images encoded in this way are of high quality, so that image data of higher quality is transmitted to the second terminal without affecting the image data transmission speed. Conversely, if the network state is poor, that is, it indicates that the network transmission speed between the first terminal and the second terminal is low, the first terminal can select a lossy coding mode for encoding the 3D image, which may reduce the time required to transfer the compressed data. Which lossless coding method is adopted in a good network state, and which lossy coding method is adopted in a poor network state, can also be set as needed.
  • A communication connection may be pre-established between the first terminal and the second terminal. For example, an instant messaging channel may be established between them, and the encoded 3D environment image is transmitted through that instant messaging channel; correspondingly, the network state of the communication connection channel established between the first terminal and the second terminal can be determined. Of course, the first terminal may also transmit the 3D environment image to the second terminal by mail or other means; the specific communication manner the first terminal adopts to transmit the 3D environment image to the second terminal is not limited. Optionally, the first terminal may also determine only the network state of the first terminal itself, and determine the coding mode according to that network state.
  • It can be understood that determining the coding mode according to the network state is only one implementation; in practical applications, the required coding mode may be preset, or the user may select a desired coding mode, which is not limited herein.
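The adaptive choice between lossless and lossy coding described above could be sketched as a simple threshold rule on the measured network state. The threshold values and return labels below are illustrative assumptions, not values from the application:

```python
def choose_coding_mode(network_speed_mbps, signal_quality):
    """Pick a coding-mode category from the measured network state.

    A fast, good-quality link can afford lossless coding (e.g. Huffman
    or run-length coding); a slow or poor link falls back to lossy
    coding (e.g. MPEG-4 style compression) to keep transmission time
    down. The thresholds here are hypothetical.
    """
    if network_speed_mbps >= 50 and signal_quality >= 0.8:
        return "lossless"   # e.g. Huffman or run-length coding
    return "lossy"          # e.g. MPEG-4 style compression
```

Within each category, the specific coding method (Huffman vs. RLC, or which lossy codec) could then be fixed by configuration, as the text notes.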
  • the first terminal encodes the 3D environment image rendered by the VR scene according to the determined encoding manner, to obtain encoded 3D environment data.
  • Optionally, the first terminal may encode the VR-rendered 3D environment data together with the audio data and data such as the viewing angle of the user on the first terminal side, so as to transmit the encoded environment data and the viewing angle of the first-terminal-side user to the second terminal.
  • the first terminal sends the encoded 3D environment image to the second terminal.
  • the processor of the first terminal sends the encoded 3D environment image to the communication module of the first terminal, and transmits the encoded 3D environment image to the second terminal through the communication module.
  • The network used by the first terminal to transmit the encoded 3D environment image to the second terminal may take many forms; for example, the encoded 3D environment image may be transmitted to the second terminal through a wired network, or through a wireless network such as Bluetooth or Wi-Fi.
  • the transmission protocol used to transmit the encoded 3D environment image is related to the transmission mode used to transmit the 3D environment image.
  • In practical applications, the first terminal may repeat the above steps S202 to S209 until all the environment images in the captured video have been processed and sent to the second terminal.
  • This embodiment is described taking as an example that, each time an environment image of the first terminal's current environment is acquired, if the currently acquired environment image is a two-dimensional image, the environment image is converted into virtual reality data in real time and the converted virtual reality data is transmitted to the second terminal in real time. In practical applications, the first terminal may instead, after finishing the collection of environment images of its environment, sequentially convert the collected environment images into virtual reality data and perform VR scene rendering on the virtual reality data; then each VR-rendered 3D image is encoded and transmitted in turn, or all the VR-rendered 3D images are encoded and transmitted together.
  • This embodiment describes the case where the first terminal continuously captures a multi-frame environment image. It can be understood that if the first terminal collects only one frame of environment image, it can still be processed in the manner of this embodiment of the present application; the process is similar and is not described here again.
  • In addition, if the environment data collected by the first terminal already includes VR-scene-rendered 3D environment data, the environment data may be encoded directly, so that the encoded environment data is subsequently transmitted to the second terminal.
  • This embodiment takes as an example the first terminal sending the virtual reality data corresponding to the environment image (the 3D environment image, or video data including the 3D environment image) to one second terminal. It can be understood that the first terminal can send the virtual reality data to multiple second terminals as needed; the specific process is similar and is not described here again.
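The two processing modes described above, converting and sending each frame in real time versus processing all frames first and then transmitting, can be sketched as a single loop with a mode switch. All the callables here are hypothetical stand-ins for the conversion, rendering, encoding, and transmission steps:

```python
def share_environment_stream(frames, convert_to_3d, render_vr, encode, send,
                             realtime=True):
    """Process captured 2D frames into VR data and send them.

    In real-time mode each frame is converted, VR-rendered, encoded,
    and sent as soon as it is available; in batch mode all frames are
    processed first and sent together in one encoded payload.
    """
    if realtime:
        for frame in frames:
            send(encode(render_vr(convert_to_3d(frame))))
    else:
        rendered = [render_vr(convert_to_3d(f)) for f in frames]
        send(encode(rendered))
```

The real-time path minimizes latency for live sharing; the batch path matches the alternative the text describes, where collection finishes before conversion and transmission begin.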
  • the second terminal after receiving the encoded 3D environment image, decodes the encoded 3D environment data to obtain the 3D environment image.
  • Specifically, the second terminal receives, through its communication module, the encoded 3D environment image transmitted by the first terminal, and the processor of the second terminal (which may call the data decoding module) decodes the encoded 3D environment image received by the communication module to recover the 3D environment image. Correspondingly, if encoded environment data is received, the environment data may be decoded to recover the environment data including the 3D environment image.
  • The second terminal acquires a specified viewing angle for viewing the current 3D environment image.
  • The second terminal renders the 3D environment image according to the specified viewing angle, obtaining a 3D environment image presented at the specified viewing angle.
  • Since the image decoded by the second terminal is a 3D image, the image the second terminal outputs to the display screen can be an image viewed from any angle. Therefore, the processor of the second terminal can first determine a specified viewing angle and render the 3D environment image according to it, to obtain a 3D environment image presented at the specified viewing angle.
  • the specified viewing angle may be a preset default viewing angle, or may be a viewing angle selected by a user on the second terminal side or selected in real time.
  • For example, if the first terminal also encodes and transmits the user posture data of the first-terminal-side user, the processor of the second terminal can decode the user posture data, determine from it the viewing angle of the user on the first terminal side, and use that viewing angle as the default viewing angle. In this way, the second terminal may render the 3D environment image according to the viewing angle of the first-terminal-side user, so that the user of the second terminal can experience the environment scene as viewed by the user on the first terminal side.
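The viewing-angle selection just described, using a locally selected angle if one exists and otherwise falling back to the sender's angle as the default, plus the corresponding camera direction, can be sketched as follows. The yaw-only camera model and the function names are simplifying assumptions for illustration:

```python
import math

def resolve_viewing_angle(sender_angle_deg, local_angle_deg=None):
    """Use the locally selected angle if any, else the sender's angle."""
    return local_angle_deg if local_angle_deg is not None else sender_angle_deg

def camera_forward(yaw_deg):
    """Unit forward vector of a camera rotated by `yaw_deg` about the up axis."""
    yaw = math.radians(yaw_deg)
    return (math.sin(yaw), 0.0, math.cos(yaw))
```

The resolved angle would feed the renderer's view matrix; a full implementation would track pitch and roll as well.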
  • the second terminal outputs the 3D environment image presented by the specified viewing angle to the display screen.
  • the processor of the second terminal transmits the 3D environment image rendered according to the specified viewing angle to the display unit of the second terminal to output the 3D environment image through the display unit.
  • Correspondingly, the user of the second terminal can view, through a virtual reality device, the 3D environment image output on the second terminal side, viewing it at the same angle as the user of the first terminal; the 3D image can also be viewed from any angle according to the user's own needs, thereby experiencing the visual experience of the environment in which the first-terminal-side user is located.
  • It can be seen that the first terminal may acquire an environment image of the environment in which it is located, construct from that environment image a 3D environment image reflecting the three-dimensional scene of the environment, and transmit the 3D environment image to at least one second terminal. The user of the second terminal, viewing the 3D environment image output by the second terminal, can thereby feel the visual experience of being in the environment in which the first terminal is located, experiencing the effect of being in the same environment scene as the first terminal.
  • The above embodiments describe the first terminal sharing with the second terminal the virtual reality data corresponding to the environment in which the first terminal is located. In practical applications, the first terminal may also treat the video image being viewed by the user on the first terminal side as the video image to be shared, process it by the method for sharing virtual reality data of the present application, and then share it with the second terminal, so that the user of the second terminal can experience watching the same content as the user of the first terminal. The first terminal may also use a video image stored by the first terminal as the data to be shared, process the stored video image into virtual reality data by the method for sharing virtual reality data of the present application, and transmit it to the second terminal.
  • the following describes an example in which the first terminal shares the video image being played by the first terminal to the second terminal.
  • FIG. 3 is a schematic flowchart diagram of still another embodiment of a method for sharing virtual reality data according to the present application.
  • the method in this embodiment may include:
  • the first terminal acquires video data currently played by the first terminal.
  • The video data includes at least a video image, and may further include an audio signal. The video data may be one frame of image and the audio signal associated with that frame, for example, video data composed of the frame of video image currently being played in the video file played by the terminal, together with the output audio signal. The video data may also be a video file, where the video file includes multi-frame video images and the audio signals associated with them; in that case, the first terminal may sequentially process each frame of video image in the video file in a manner similar to the process of this embodiment, which is not described here again.
  • the first terminal may acquire video data currently playing in the target play window displayed by the first terminal, where the target play window may be an image output window of the specified application in the first terminal.
  • For example, the specified application may be a game application, in which case the target play window is the game window in which the game application outputs its game screen. Correspondingly, the video data acquired by the first terminal may be game data, which includes the game screen, the sound signals in the game, and so on. By the virtual reality sharing method of this embodiment, the game screen in the game data can be converted into virtual reality data and shared with the user of the second terminal, so that the user of the second terminal can view the three-dimensional scene corresponding to the game screen played on the first terminal side.
  • For another example, the specified application may be a player, and the target play window may be the video play window of the player. Correspondingly, the first terminal may acquire the video image played in the player's video play window, convert it into virtual reality data by the virtual reality sharing method of this embodiment, and share it with the user of the second terminal, so that the user of the second terminal can experience the three-dimensional scene of the video being played on the first terminal. Of course, the specified application can also be another application having an image output function, such as a browser, which is not limited herein.
  • Optionally, while acquiring the currently viewed video image, the first terminal may also sense, by a device such as a sensor, the viewing angle at which the user on the first terminal side views the video image, so that the second terminal side can view the video image at the same viewing angle.
  • The first terminal detects whether the video image in the video data is a 3D image; if so, the video data is used as the virtual reality data currently played by the first terminal, and step S306 is performed; if not, step S303 is performed.
  • When the video data includes multiple frames of video images, it may be detected whether each frame of video image is a 3D image, and the subsequent conversion operations are performed on the frames that are not 3D images.
  • the first terminal determines depth information required to convert the video image in the video data from the 2D image to the 3D image;
  • Optionally, the motion information of moving objects in the video image and the most recent preceding frame image may be combined to optimize the depth information corresponding to the video image.
  • the first terminal converts the video image in the video data into a 3D video image according to the depth information corresponding to the video image and the video image, to obtain virtual reality data converted by the video data.
  • For convenience of distinction, the 3D image converted from the video image is referred to as a 3D video image. The video data including the 3D video image is actually three-dimensional video data, so the converted video data may be referred to as virtual reality data, used to reflect the three-dimensional scene of the video image currently played by the first terminal.
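The 2D-to-3D conversion using per-pixel depth can be illustrated with a row-wise depth-image-based rendering sketch: each pixel is shifted horizontally by a disparity proportional to its depth to form left- and right-eye views. The depth convention and disparity scale below are assumptions, and occlusion handling is deliberately omitted (the gaps left behind correspond to the holes addressed by the hole-filling step):

```python
def row_to_stereo(row, depth_row, max_disparity=4):
    """Shift one image row into left/right views using its depth row.

    Depth values are assumed to lie in [0, 1], with larger values
    meaning nearer pixels (an assumed convention) and therefore larger
    horizontal disparity. Positions left empty by the shift keep the
    value None and would be filled by a later hole-filling pass.
    """
    w = len(row)
    left = [None] * w
    right = [None] * w
    for x in range(w):
        # Half the disparity goes to each eye, in opposite directions.
        d = int(round(depth_row[x] * max_disparity / 2))
        if 0 <= x + d < w:
            left[x + d] = row[x]
        if 0 <= x - d < w:
            right[x - d] = row[x]
    return left, right
```

A full converter would apply this per row over the whole frame and per colour channel, then run hole filling on the resulting views.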
  • the first terminal performs hole filling and optimization on the 3D video image in the virtual reality data.
  • This step serves only to optimize the 3D effect of the image; in practical applications, step S305 may be omitted.
  • the first terminal performs VR scene rendering on the 3D video image in the virtual reality data to obtain the rendered virtual reality data.
  • The VR scene rendering includes performing one or more kinds of processing, such as anti-distortion, anti-dispersion, and interpupillary-distance adjustment, on the 3D video image.
  • the process of converting the video image into the 3D video image is similar to the process of converting the environment image into the 3D environment image in the previous embodiment. For details, refer to the related description of the previous embodiment, and details are not described herein again.
  • the first terminal determines, according to a current network state between the first terminal and the second terminal, a coding mode applicable to the network state.
  • It can be understood that determining the coding mode according to the network state is only one implementation; in practical applications, the required coding mode may be preset, or the user may select a desired coding mode, which is not limited herein.
  • the first terminal encodes the virtual reality data that is rendered by the VR scene according to the determined encoding manner, to obtain the encoded virtual reality data.
  • Optionally, if the first terminal acquires the viewing angle at which the user views the video image, the virtual reality data may be encoded together with the viewing angle, so that the encoded virtual reality data and the viewing angle are subsequently sent together to the second terminal. The process of encoding and transmitting the virtual reality data can refer to the related process in the previous embodiment and is not repeated here.
  • the first terminal sends the encoded virtual reality data to the second terminal.
  • the second terminal after receiving the encoded virtual reality data, decodes the encoded virtual reality data to obtain virtual reality data including a 3D video image.
  • The second terminal acquires a specified viewing angle for currently viewing the 3D video image.
  • The second terminal renders the 3D video image according to the specified viewing angle, obtaining a 3D video image presented at the specified viewing angle.
  • the specified viewing angle may be a preset default viewing angle, or may be a viewing angle selected by a user on the second terminal side or selected in real time.
  • For example, if the communication module of the second terminal receives the viewing angle at which the first-terminal-side user views the video image, that viewing angle can be used as the default viewing angle.
  • the second terminal outputs the 3D video image presented by the specified viewing angle to the display screen.
  • In this embodiment, the first terminal sends the 3D image corresponding to the video image currently played by the first terminal to at least one second terminal. If the video image viewed by the user of the first terminal is a 3D video image, sharing that 3D video image with the second terminal lets the user of the second terminal enjoy the same viewing experience as the user of the first terminal. If the video image viewed by the user of the first terminal is a 2D video image, the video image played on the first terminal side can be converted into a 3D image by the solution of the present application and shared with the second terminal, so that sharing of the 3D image is realized and the user of the second terminal can simultaneously view the virtual reality data corresponding to the video image viewed by the user on the first terminal side.
  • The present application further provides a computer-readable storage medium having stored therein instructions that, when run on a terminal, cause the terminal to perform any of the methods of sharing virtual reality data described above.
  • the present application also provides a computer program product comprising instructions for causing a terminal to perform a method of sharing virtual reality data as described above when the computer program product is run on a terminal.
  • the present application also provides an apparatus for sharing virtual reality data, which can be applied to the aforementioned terminal for transmitting virtual reality data.
  • FIG. 4 is a schematic structural diagram of an embodiment of an apparatus for sharing virtual reality data according to the present application. The apparatus of this embodiment may include:
  • the data collection module 401 is configured to obtain environment data of an environment in which the terminal is located, where the environment data includes at least an environment image.
  • the environmental data can be understood as video data composed of at least one frame of environment image.
  • The data collection module acquires, through an image acquisition device such as a camera (a camera that captures two-dimensional images or a camera that collects VR data), an environment image of the environment in which the terminal is located.
  • The image processing module 402 is configured to, in the case where the environment image in the environment data is a two-dimensional image, convert the environment image from a two-dimensional image into a three-dimensional image, obtaining the virtual reality data converted from the environment data. The virtual reality data is used to reflect the three-dimensional scene of the environment in which the terminal is located.
  • the data transmission module 403 is configured to transmit the virtual reality data to the at least one receiving terminal.
  • Optionally, the environment data acquired by the data collection module may further include audio data collected through a microphone or the like on the terminal, as well as a user posture and the like sensed by a sensor; the acquired environment data and user posture data may be collectively referred to as user environment data. The user posture may be the viewing angle of the environment in which the terminal is located. Correspondingly, the data transmission module 403 may transmit the user posture data while transmitting the virtual reality data, so that the receiving terminal can render the virtual reality data according to that viewing angle and output the virtual reality data presented at that viewing angle.
  • the image processing module may further detect whether the collected data is 3D data.
  • The process by which the image processing module converts the environment image in the environment data into a 3D image may be: determining the depth information corresponding to the 2D environment data, and constructing 3D environment data from the acquired depth information and the 2D environment data. When processing video with motion information, the motion information and inter-frame information are used to optimize the depth information.
  • Optionally, the image processing module may also perform hole filling and optimization on the 3D environment data, and may render the 3D environment data, that is, perform the VR scene rendering mentioned above. If the collected data is already 3D data, the image processing module may directly determine the environment data as virtual reality data and perform VR scene rendering on the 3D environment data directly.
  • the apparatus may further include: a data encoding module 404, configured to determine an encoding mode based on a network state of the terminal; and encode the virtual reality data according to the encoding mode.
  • the data encoding module can adaptively select the most suitable coding mode according to the network state, and encode the 3D environment data according to the selected coding mode, so as to improve the speed and reliability of the subsequent transmission of the virtual reality data.
  • the present application further provides another apparatus for sharing virtual reality data, the apparatus being applicable to a receiving terminal that receives the virtual reality data.
  • FIG. 5 a schematic structural diagram of another embodiment of an apparatus for sharing virtual reality data is shown.
  • the apparatus of this embodiment may include:
  • the data receiving module 501 is configured to receive virtual reality data from the sending terminal.
  • the data receiving module 501 can establish a transmission connection with the data transmission module of the transmitting terminal by using a transmission protocol.
  • the display module 502 is configured to display the virtual reality data.
  • The display screen to which the display module outputs the virtual reality data includes, but is not limited to, a mobile phone screen or the display screen of a VR helmet.
  • the apparatus further includes a data decoding module 503.
  • the data receiving module transmits the virtual reality data to the data decoding module.
  • the data decoding module is configured to decode the virtual reality data, and transmit the decoded virtual reality data to the display module for display.
  • the steps of a method or algorithm described in connection with the embodiments disclosed herein can be implemented directly in hardware, a software module executed by a processor, or a combination of both.
  • The software module can be placed in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.


Abstract

The present application relates to the field of communications technologies, and in particular to virtual reality technology. A terminal obtains environment data, including an environment image, of its current environment and, in the case that the environment image is a two-dimensional image, converts the environment image in the environment data from a two-dimensional image into a three-dimensional image, to obtain virtual reality data that reflects the three-dimensional scene of the environment of the terminal. After the virtual reality data is transmitted to at least one receiving terminal, a user of the receiving terminal may feel as if he/she is in the environment where the terminal is located by viewing the virtual reality data output by the receiving terminal, thereby achieving the experience effect that multiple users share a same scene.

Description

Method and device for sharing virtual reality data

The present application claims priority to Chinese Patent Application No. 201611224693.8, entitled "Method and Device for Sharing Virtual Reality Data", filed with the Chinese Patent Office on December 27, 2016, the entire contents of which are incorporated herein by reference.
Technical field

The present application relates to the field of communication technologies, and in particular to virtual reality technology.
Background

Virtual Reality (VR) technology uses computer simulation to generate a virtual world in three-dimensional space, providing the user with simulations of vision, touch, and other senses, so that the user feels immersed in the scene and can observe things in the three-dimensional space promptly and without restriction.

At present, virtual reality technology has been applied in many fields such as games and movies. In many of these fields, however, the virtual reality experience is relatively limited and can only meet the viewing needs of a single user. With the continuous development of networks and social applications, the demand for interaction among network users keeps growing. How to enable multiple users to experience the effect of being in the same scene is therefore a technical problem that urgently needs to be solved by those skilled in the art.
发明内容Summary of the invention
有鉴于此,本申请提供了一种共享虚拟现实数据的方法和设备,以实现多个用户之间相同场景的虚拟现实数据的共享,使得处于不同地方的多个用户可以体验到处于相同场景中的体验效果。In view of this, the present application provides a method and device for sharing virtual reality data, so as to realize sharing of virtual reality data of the same scene among multiple users, so that multiple users in different places can experience being in the same scene. The effect of the experience.
一方面,本申请提供一种共享虚拟现实数据的方法,在该方法中,第一终端采集当前所处环境的环境数据,该环境数据至少包括该第一终端所处环境的环境图像;在该环境图像为二维图像的情况下,将该环境数据中的环境图像由二维图像转换为三维图像,从而将该环境数据转换为用于反映所述第一终端所处环境的三维场景的虚拟现实数据,这样,在将该虚拟现实数据传输给至少一个第二终端之后,第二终端的用户通过观看该第二终端输出的虚拟现实数据,可以感受到如同身处该第一终端所处的环境中的视觉体验,从而体验到与第一终端处于相同环境场景的体验效果。In one aspect, the present application provides a method for sharing virtual reality data, in which the first terminal collects environmental data of an environment in which the current terminal is located, and the environment data includes at least an environment image of an environment in which the first terminal is located; In the case where the environment image is a two-dimensional image, the environment image in the environment data is converted from a two-dimensional image into a three-dimensional image, thereby converting the environment data into a virtual one for reflecting a three-dimensional scene in which the first terminal is located Real data, such that after transmitting the virtual reality data to the at least one second terminal, the user of the second terminal can feel that the first terminal is located by watching the virtual reality data output by the second terminal The visual experience in the environment, thus experiencing the experience of the same environment scene as the first terminal.
在一种可能的设计中,将所述环境数据中的所述环境图像由二维图像转换为三维图像,可以是为该环境数据中的环境图像创建深度信息,然后,利用该深度信息和该环境图像,构建该环境图像对应的三维图像,这样,将环境数据中二维的环境图像替换为三维的环境图像,便实现了将环境数据转换为虚拟显示数据。In a possible design, converting the environment image in the environment data from a two-dimensional image to a three-dimensional image may be to create depth information for an environment image in the environment data, and then using the depth information and the The environment image is constructed by constructing a three-dimensional image corresponding to the environment image, so that the environment data is converted into virtual display data by replacing the two-dimensional environment image in the environment data with the three-dimensional environment image.
在一种可能的设计中，在该环境数据中的环境图像为三维图像的情况下，可以直接将该环境数据确定为用于反映该第一终端所处环境的三维场景的虚拟现实数据，并将该虚拟现实数据传输给至少一个第二终端，从而实现了将反映第一终端所处环境的三维场景的数据共享给至少一个第二终端，使得第二终端的用户基于该虚拟现实数据可以体验到与第一终端的用户处于相同环境中的体验。In a possible design, when the environment image in the environment data is a three-dimensional image, the environment data may be directly determined as the virtual reality data reflecting the three-dimensional scene of the environment in which the first terminal is located, and the virtual reality data is transmitted to at least one second terminal. In this way, data reflecting the three-dimensional scene of the environment in which the first terminal is located is shared with the at least one second terminal, so that, based on the virtual reality data, a user of the second terminal can experience being in the same environment as the user of the first terminal.
在一种可能的设计中，在第一终端得到虚拟现实数据之后，第一终端还可以对虚拟现实数据中三维的环境图像进行虚拟现实场景渲染，该虚拟现实场景渲染可以包括：反向畸变、反色散和瞳距调节等一种或几种。通过对虚拟现实数据进行虚拟现实场景渲染，可以减少后续第二终端展现虚拟现实数据的过程中出现图像失真等异常情况。In a possible design, after the first terminal obtains the virtual reality data, the first terminal may further perform virtual reality scene rendering on the three-dimensional environment image in the virtual reality data, where the virtual reality scene rendering may include one or more of reverse distortion, anti-dispersion, and interpupillary distance adjustment. By performing virtual reality scene rendering on the virtual reality data, abnormalities such as image distortion can be reduced when the second terminal subsequently presents the virtual reality data.
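As a hedged illustration of the "reverse distortion" step mentioned above: VR lenses typically introduce a pincushion distortion, so a renderer can apply the inverse (barrel) distortion in advance so the two cancel out. The radial polynomial form and the coefficients k1/k2 below are assumptions chosen for the example, not values from this application.

```python
def predistort(u, v, k1=0.22, k2=0.24):
    """Apply a hypothetical inverse (barrel) distortion to normalized
    lens-centered coordinates (u, v); k1/k2 are illustrative coefficients."""
    r2 = u * u + v * v                   # squared radius from the lens center
    scale = 1 + k1 * r2 + k2 * r2 * r2   # radial distortion polynomial
    return u / scale, v / scale          # pull pixels toward the center

u2, v2 = predistort(0.5, 0.0)
print(u2, v2)  # the point moves inward: u2 < 0.5
```

In practice this warp is applied per-eye over the whole rendered frame (usually on the GPU), with coefficients matched to the specific headset lenses.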
在一种可能的设计中，在第一终端采集第一终端当前所处环境的环境数据的同时，还可以采集该第一终端侧的用户对所述环境的观看视角；相应的，在第一终端将虚拟现实数据传输给至少一个第二终端的同时，该第一终端还可以将该观看视角发送给该至少一个第二终端，以便该第二终端按照该观看视角渲染虚拟现实数据，并输出以该观看视角呈现出的虚拟现实数据，从而使得第二终端的用户可以与第一终端的用户以相同的视角观看该第一终端所处的环境场景。In a possible design, while the first terminal collects the environment data of the environment in which the first terminal is currently located, the first terminal may also collect the viewing angle at which the user on the first terminal side views the environment. Correspondingly, while transmitting the virtual reality data to the at least one second terminal, the first terminal may further send the viewing angle to the at least one second terminal, so that the second terminal renders the virtual reality data according to the viewing angle and outputs the virtual reality data presented at that viewing angle. The user of the second terminal can thus view the environment scene of the first terminal from the same perspective as the user of the first terminal.
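One hedged way to model "rendering according to the viewing angle": share the angle as yaw/pitch values and rotate the 3D scene with the corresponding rotation matrix before display. The yaw/pitch parameterization is an assumption for illustration; the application does not specify a particular angle representation.

```python
import numpy as np

def view_rotation(yaw_deg, pitch_deg):
    """Rotation matrix for a viewing direction given as yaw (around Y)
    and pitch (around X); the parameterization is illustrative."""
    y, p = np.radians(yaw_deg), np.radians(pitch_deg)
    ry = np.array([[np.cos(y), 0.0, np.sin(y)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(y), 0.0, np.cos(y)]])
    rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(p), -np.sin(p)],
                   [0.0, np.sin(p), np.cos(p)]])
    return ry @ rx

# a 90-degree yaw turns the forward (+Z) direction into +X
forward = np.array([0.0, 0.0, 1.0])
print(np.round(view_rotation(90, 0) @ forward, 6))
```

The second terminal would apply this matrix to the scene (or its inverse to the camera) so both users look in the same direction.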
在一种可能的设计中,为了适应数据传输协议,在第一终端将所述虚拟现实数据传输给至少一个第二终端之前,该第一终端还可以对虚拟现实数据进行编码。In a possible design, in order to adapt to the data transmission protocol, the first terminal may further encode the virtual reality data before the first terminal transmits the virtual reality data to the at least one second terminal.
进一步的，为了提高数据传输的速度以及可靠性，可以确定该第一终端的网络状态，并基于所述网络状态确定编码模式，按照所述编码模式对所述虚拟现实数据进行编码。Further, in order to improve the speed and reliability of data transmission, a network state of the first terminal may be determined, an encoding mode may be determined based on the network state, and the virtual reality data may be encoded according to the encoding mode.

又一方面，本申请还提供了一种终端，该终端具有实现上述方法中实际终端行为的功能。所述功能可以通过硬件实现，也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能相对应的模块。In still another aspect, the present application further provides a terminal having a function of implementing the terminal behavior in the foregoing method. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the foregoing function.
在一个可能的设计中，该终端中包括图像采集装置、数据接口、处理器以及通信模块。其中，图像采集装置用于采集终端当前所处环境的环境图像；数据接口，用于获取该终端当前所处环境的环境数据，该环境数据至少包括：图像采集装置采集到的环境图像；该处理器被配置为支持终端执行上述方法中相应的功能，如该处理器可以被配置为用于在所述环境图像为二维图像的情况下，将所述环境数据中的所述环境图像由二维图像转换为三维图像，得到由所述环境数据转换出的虚拟现实数据，所述虚拟现实数据用于反映所述终端所处环境的三维场景；该通信模块，用于将所述虚拟现实数据传输给至少一个接收终端。In a possible design, the terminal includes an image collection device, a data interface, a processor, and a communication module. The image collection device is configured to collect an environment image of the environment in which the terminal is currently located. The data interface is configured to obtain environment data of the environment in which the terminal is currently located, the environment data including at least the environment image collected by the image collection device. The processor is configured to support the terminal in performing the corresponding functions in the foregoing method; for example, the processor may be configured to, when the environment image is a two-dimensional image, convert the environment image in the environment data from a two-dimensional image into a three-dimensional image to obtain virtual reality data converted from the environment data, where the virtual reality data reflects a three-dimensional scene of the environment in which the terminal is located. The communication module is configured to transmit the virtual reality data to at least one receiving terminal.
在一种可能的设计中，该处理器，还用于在所述环境图像为三维图像的情况下，将所述环境数据确定为所述用于反映所述终端所处环境的三维场景的虚拟现实数据。In a possible design, the processor is further configured to, when the environment image is a three-dimensional image, determine the environment data as the virtual reality data reflecting the three-dimensional scene of the environment in which the terminal is located.
在一种可能的设计中，所述处理器还用于，在得到虚拟现实数据之后，对所述虚拟现实数据中三维的环境图像进行虚拟现实场景渲染，所述虚拟现实场景渲染包括：反向畸变、反色散和瞳距调节中的一种或几种。In a possible design, the processor is further configured to, after obtaining the virtual reality data, perform virtual reality scene rendering on the three-dimensional environment image in the virtual reality data, where the virtual reality scene rendering includes one or more of reverse distortion, anti-dispersion, and interpupillary distance adjustment.
在一种可能的设计中，该终端还可以包括：传感器，用于感应所述终端侧的用户对所述环境的观看视角；In a possible design, the terminal may further include a sensor configured to sense the viewing angle at which the user on the terminal side views the environment.
该数据接口，还用于获取所述传感器采集到的所述终端侧的用户对所述环境的观看视角；The data interface is further configured to obtain the viewing angle, collected by the sensor, at which the user on the terminal side views the environment.
该通信模块，还用于在将所述虚拟现实数据传输给至少一个第二终端的同时，将所述观看视角发送给所述至少一个第二终端，以便所述第二终端按照该观看视角渲染所述虚拟现实数据，并输出以所述观看视角呈现出的虚拟现实数据。The communication module is further configured to, while the virtual reality data is transmitted to the at least one second terminal, send the viewing angle to the at least one second terminal, so that the second terminal renders the virtual reality data according to the viewing angle and outputs the virtual reality data presented at the viewing angle.
在一种可能的设计中，该处理器还用于，在通信模块将该虚拟现实数据传输给至少一个接收终端之前，确定该终端的网络状态；基于该网络状态确定编码模式；按照该编码模式对该虚拟现实数据进行编码。In a possible design, the processor is further configured to, before the communication module transmits the virtual reality data to the at least one receiving terminal, determine a network state of the terminal, determine an encoding mode based on the network state, and encode the virtual reality data according to the encoding mode.
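A hedged sketch of the network-state-driven encoding choice described above. The thresholds, codec names, and bitrates are illustrative assumptions; the application does not specify concrete encoding modes.

```python
def pick_encoding_mode(bandwidth_mbps, rtt_ms):
    """Map a measured network state to an encoding mode; all numbers and
    codec names here are hypothetical, chosen only to illustrate the idea
    of degrading gracefully as the network worsens."""
    if bandwidth_mbps >= 50 and rtt_ms < 30:
        return {"codec": "H.265", "bitrate_mbps": 40}  # good network: high quality
    if bandwidth_mbps >= 10:
        return {"codec": "H.264", "bitrate_mbps": 8}   # moderate network
    return {"codec": "H.264", "bitrate_mbps": 2}       # poor network: low bitrate

print(pick_encoding_mode(100, 10)["codec"])  # H.265
```

A real implementation would re-probe the network periodically and switch modes mid-stream rather than deciding once.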
本申请实施例的第二方面和第一方面的设计思路一致,技术手段类似,技术方案带来的具体有益效果请参考第一方面,不再赘述。The second aspect of the embodiment of the present application is consistent with the design of the first aspect, and the technical means are similar. For the specific beneficial effects brought by the technical solution, please refer to the first aspect, and details are not described herein.
又一方面，本申请还提供了另一种共享虚拟现实数据的方法，在该方法中，终端获取第一终端待分享的目标数据，该目标数据包括至少一帧图像；在该目标数据中的图像为二维图像的情况下，将目标数据中的图像由二维图像转换为三维图像，得到由该目标数据转换出的虚拟现实数据，由于该虚拟现实数据可以反映依据该目标数据所构建出的三维场景，这样，在将虚拟现实数据传输给至少一个第二终端之后，第二终端的用户可以在该第二终端上观看该虚拟现实数据，实现了虚拟现实数据的共享；而且，第二终端的用户通过该虚拟现实数据能够更为真实的体验到该第一终端侧所播放的图像所对应的真实环境场景。In still another aspect, the present application further provides another method for sharing virtual reality data. In this method, a terminal obtains target data to be shared by a first terminal, the target data including at least one frame of image. When the image in the target data is a two-dimensional image, the image in the target data is converted from a two-dimensional image into a three-dimensional image to obtain virtual reality data converted from the target data. Because the virtual reality data can reflect a three-dimensional scene constructed based on the target data, after the virtual reality data is transmitted to at least one second terminal, a user of the second terminal can view the virtual reality data on the second terminal, thereby implementing sharing of the virtual reality data; moreover, through the virtual reality data, the user of the second terminal can more realistically experience the real environment scene corresponding to the image played on the first terminal side.
在一种可能的设计中，终端获取第一终端侧待分享的目标数据可以为获取第一终端当前所处环境的环境数据，该环境数据包括：第一终端所处环境的环境图像。In a possible design, the obtaining, by the terminal, of the target data to be shared by the first terminal side may be obtaining environment data of the environment in which the first terminal is currently located, where the environment data includes an environment image of the environment in which the first terminal is located.
在一种可能的设计中,终端获取第一终端待分享的目标数据可以为获取第一终端当前播放的视频数据,该视频数据包括至少一帧视频图像。In a possible design, the acquiring, by the terminal, the target data to be shared by the first terminal may be acquiring video data currently being played by the first terminal, where the video data includes at least one frame of video image.
在一种可能的设计中，终端获取所述第一终端当前播放的视频数据可以为获取所述第一终端中展现出的目标播放窗口(如游戏应用的游戏窗口、浏览器的视频展现窗口、播放器等应用的播放窗口)中当前播放的视频数据，将该目标播放窗口播放的视频数据所转换出的虚拟现实数据发送给第二终端之后，第二终端的用户通过观看该第二终端输出的虚拟现实数据，可以如同身处该第一终端中该目标数据所对应的三维环境场景中(如，感受身处该第一终端输出的游戏的三维环境场景中)。In a possible design, the obtaining, by the terminal, of the video data currently played by the first terminal may be obtaining video data currently played in a target play window presented on the first terminal (for example, a game window of a game application, a video presentation window of a browser, or a play window of a player application). After the virtual reality data converted from the video data played in the target play window is sent to the second terminal, the user of the second terminal, by viewing the virtual reality data output by the second terminal, can feel as if being in the three-dimensional environment scene corresponding to the target data on the first terminal (for example, in the three-dimensional environment scene of the game output by the first terminal).
又一方面,本申请还提供了一种终端,该终端具有实现第三方面的上述方法中实际终端行为的功能。所述功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能相对应的模块。In still another aspect, the present application also provides a terminal having a function of realizing the actual terminal behavior in the above method of the third aspect. The functions may be implemented by hardware or by corresponding software implemented by hardware. The hardware or software includes one or more modules corresponding to the functions described above.
在一种可能的设计中，该终端包括数据接口、处理器以及通信模块，其中，数据接口用于获取第一终端待分享的目标数据，所述目标数据包括至少一帧图像；处理器，用于在所述目标数据中的图像为二维图像的情况下，将所述目标数据中的图像由二维图像转换为三维图像，得到由所述目标数据转换出的虚拟现实数据，所述虚拟现实数据用于反映依据所述目标数据所构建出的三维场景；通信模块，用于将所述虚拟现实数据传输给至少一个第二终端。In a possible design, the terminal includes a data interface, a processor, and a communication module. The data interface is configured to obtain target data to be shared by a first terminal, the target data including at least one frame of image. The processor is configured to, when the image in the target data is a two-dimensional image, convert the image in the target data from a two-dimensional image into a three-dimensional image to obtain virtual reality data converted from the target data, where the virtual reality data reflects a three-dimensional scene constructed based on the target data. The communication module is configured to transmit the virtual reality data to at least one second terminal.
在一种可能的设计中,所述数据接口具体用于,获取第一终端当前所处环境的环境数据,所述环境数据包括:第一终端所处环境的环境图像。In a possible design, the data interface is specifically configured to obtain environment data of an environment in which the first terminal is currently located, where the environment data includes: an environment image of an environment in which the first terminal is located.
在一种可能的设计中,所述数据接口在获取第一终端待分享的目标数据时,具体用于获取所述第一终端当前播放的视频数据,所述视频数据包括至少一帧视频图像。In a possible design, when the data interface is to acquire the target data to be shared by the first terminal, the data interface is specifically configured to acquire video data currently played by the first terminal, where the video data includes at least one frame of video image.
在一种可能的设计中，所述数据接口具体用于，获取所述第一终端中展现出的目标播放窗口中当前播放的视频数据，其中，所述目标播放窗口为所述第一终端中指定应用的图像输出窗口。In a possible design, the data interface is specifically configured to obtain video data currently played in a target play window presented on the first terminal, where the target play window is an image output window of a specified application on the first terminal.
本申请实施例的第四方面和第三方面的设计思路一致，技术手段类似，技术方案带来的具体有益效果请参考第三方面，不再赘述。The fourth aspect of the embodiments of the present application is consistent with the design idea of the third aspect, and the technical means are similar; for the specific beneficial effects brought by the technical solution, refer to the third aspect, and details are not described herein again.
附图说明DRAWINGS
为了更清楚地说明本申请实施例的技术方案，下面将对实施例描述中所需要使用的附图作简单的介绍，显而易见地，下面描述中的附图仅仅是本申请的一些实施例，对于本领域普通技术人员来讲，在不付出创造性劳动的前提下，还可以根据这些附图获得其他的附图。To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings used in describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative efforts.
图1示出了本申请一种终端的组成架构示意图；FIG. 1 is a schematic structural diagram of a terminal according to the present application;
图2示出了本申请一种共享虚拟现实数据的方法一个实施例的流程示意图；FIG. 2 is a schematic flowchart of an embodiment of a method for sharing virtual reality data according to the present application;
图3示出了本申请一种共享虚拟现实数据的方法又一个实施例的流程示意图；FIG. 3 is a schematic flowchart of still another embodiment of a method for sharing virtual reality data according to the present application;
图4示出了本申请一种共享虚拟现实数据的装置一个实施例的组成结构示意图；FIG. 4 is a schematic structural diagram of an embodiment of an apparatus for sharing virtual reality data according to the present application;
图5示出了本申请又一种共享虚拟现实数据的装置一个实施例的组成结构示意图。FIG. 5 is a schematic structural diagram of an embodiment of another apparatus for sharing virtual reality data according to the present application.
具体实施方式detailed description
本申请实施例的共享虚拟现实数据的方法适用于不同终端设备之间共享虚拟现实数据。The method for sharing virtual reality data in the embodiment of the present application is applicable to sharing virtual reality data between different terminal devices.
在本申请实施例中，终端可以包括但不限于手机、移动电脑、平板电脑、个人数字助理(Personal Digital Assistant，PDA)、媒体播放器、智能可穿戴设备(如，智能眼镜和头戴式智能设备等)等设备。该终端具体有运行应用程序(Application，APP)、接入网络等功能。In the embodiments of the present application, the terminal may include, but is not limited to, a mobile phone, a mobile computer, a tablet computer, a personal digital assistant (PDA), a media player, a smart wearable device (for example, smart glasses or a head-mounted smart device), and the like. The terminal specifically has functions such as running an application (APP) and accessing a network.
如图1,为本申请实施例相关的终端100的部分组成结构示意图。FIG. 1 is a schematic structural diagram of a part of a terminal 100 related to an embodiment of the present application.
参考图1,终端100包括:通信模块110、存储器120、输入装置130、显示器140、传感器150、音频电路160、图像采集装置170以及处理器180等部件。其中,通信模块110、存储器120、输入装置130、显示器140、传感器150、音频电路160、图像采集装置170以及处理器180通过通信总线190相连。Referring to FIG. 1, the terminal 100 includes components such as a communication module 110, a memory 120, an input device 130, a display 140, a sensor 150, an audio circuit 160, an image capture device 170, and a processor 180. The communication module 110, the memory 120, the input device 130, the display 140, the sensor 150, the audio circuit 160, the image acquisition device 170, and the processor 180 are connected by a communication bus 190.
本领域技术人员可以理解,图1中示出的终端结构并不构成对终端的限定,在实际中,该终端可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。It will be understood by those skilled in the art that the terminal structure shown in FIG. 1 does not constitute a limitation on the terminal. In practice, the terminal may include more or less components than those illustrated, or combine some components, or different. Parts layout.
下面结合图1对终端100的各个构成部件进行具体的介绍:The components of the terminal 100 will be specifically described below with reference to FIG. 1 :
通信模块110可用于收发信息，如信号的接收和发送等，如，终端为手机时，该通信模块可以为射频(Radio Frequency，RF)电路，该RF电路可以包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(Low Noise Amplifier，LNA)以及双工器等。此外，RF电路还可以通过无线通信与网络和其他设备通信。该通信模块还可以包括数据传输模块和数据接收模块，以实现图像、音频或者视频数据的接收以及发送，如通信模块可以包括蓝牙模块、WiFi模块等等。The communication module 110 may be configured to receive and send information, such as receiving and sending signals. For example, when the terminal is a mobile phone, the communication module may be a radio frequency (RF) circuit, and the RF circuit may include, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit may also communicate with a network and other devices through wireless communication. The communication module may further include a data sending module and a data receiving module to implement receiving and sending of image, audio, or video data; for example, the communication module may include a Bluetooth module, a WiFi module, and the like.
存储器120可用于存储软件程序以及模块。该存储器还可以存储本申请所涉及的图像、音频等数据。The memory 120 can be used to store software programs as well as modules. The memory can also store data such as images, audio, and the like involved in the present application.
在一种可能的实现方式中，该存储器120可主要包括存储程序区和存储数据区，其中，存储程序区可存储操作系统、至少一个功能(比如声音播放功能、图像播放功能等)所需的应用程序等。在本申请实施例中，该存储程序区可以存储用于获取第一终端所处环境的图像采集模块；用于处理该第一终端所处环境的环境图像的图像处理模块；用于对待传输的数据进行编码的数据编码模块；以及用于数据传输的数据传输模块。In a possible implementation, the memory 120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, applications required by at least one function (such as a sound playing function and an image playing function), and the like. In the embodiments of the present application, the program storage area may store an image collection module for obtaining the environment in which the first terminal is located, an image processing module for processing the environment image of the environment in which the first terminal is located, a data encoding module for encoding data to be transmitted, and a data transmission module for data transmission.
其中,该存储数据区可存储根据终端100的使用所创建的数据,比如,音频数据、图像数据等等。The storage data area may store data created according to the use of the terminal 100, such as audio data, image data, and the like.
此外,存储器120可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。Moreover, memory 120 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
输入装置130可用于接收输入的数字或字符信息,以及产生与终端100的用户设置以及功能控制有关的键信号输入。如,以终端为手机为例,该输入装置130可以包括触控面板以及其他输入设备。触控面板也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板上或在触控面板附近的操作),并根据预先设定的程序驱动相应的连接装置。具体地,其他输入设备可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。The input device 130 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal 100. For example, taking the terminal as a mobile phone as an example, the input device 130 may include a touch panel and other input devices. The touch panel, also known as a touch screen, can collect touch operations on or near the user (such as the user using a finger, a stylus, or the like, any suitable object or accessory on or near the touch panel), and The corresponding connecting device is driven according to a preset program. Specifically, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
显示器140(也可以称为显示模块),可用于显示由用户输入的信息或提供给用户的信息以及终端100的各种菜单。显示器140可包括显示面板,在一种可能的情况中,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板。虽然在图1中,输入装置与显示器是作为两个独立的部件来实现终端100的输入和输出功能,但是在某些实施例中,如,手机等移动终端,可以将输入装置(如,触控面板)与显示面板集成,而实现手机的输入和输出功能。Display 140 (which may also be referred to as a display module) may be used to display information entered by the user or information provided to the user as well as various menus of terminal 100. The display 140 may include a display panel. In one possible case, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Although in FIG. 1, the input device and the display are implemented as two separate components to implement the input and output functions of the terminal 100, in some embodiments, such as a mobile terminal such as a mobile phone, the input device (eg, touch The control panel is integrated with the display panel to implement the input and output functions of the mobile phone.
终端100还可包括至少一种传感器150，比如光传感器、运动传感器以及其他传感器。特别的，为了能够确定终端当前的观看视角，终端至少包括能够感应用户身体姿态的传感器。The terminal 100 may further include at least one sensor 150, such as a light sensor, a motion sensor, or another sensor. In particular, in order to determine the current viewing angle of the terminal, the terminal includes at least a sensor capable of sensing the body posture of the user.
音频电路160可以连接有扬声器和麦克风,从而提供用户与终端100之间的音频接口。音频电路160可将接收到的音频数据转换后的电信号,传输到扬声器,由扬声器转换为声音信号输出;另一方面,麦克风将收集的声音信号转换为电信号,由音频电路160接收后转换为音频数据,再将音频数据输出至通信模块110以发送给另一终端,或者将音频数据输出至存储器120以便进一步处理。The audio circuit 160 can be coupled to a speaker and a microphone to provide an audio interface between the user and the terminal 100. The audio circuit 160 can transmit the converted electrical data of the received audio data to the speaker and convert it into a sound signal output by the speaker; on the other hand, the microphone converts the collected sound signal into an electrical signal, which is received by the audio circuit 160 and then converted. For audio data, the audio data is then output to the communication module 110 for transmission to another terminal, or the audio data is output to the memory 120 for further processing.
图像采集装置170用于采集终端所处环境的图像数据，并将图像采集数据传输给处理器进行处理。如，图像采集装置170可以为摄像头。可以理解的是，图1是以该终端设置有该图像采集装置为例进行介绍，但是在实际应用中，该图像采集装置也可以设置于终端的外部，并通过线路或者无线网络与该终端实现相连。The image collection device 170 is configured to collect image data of the environment in which the terminal is located, and transmit the collected image data to the processor for processing. For example, the image collection device 170 may be a camera. It can be understood that FIG. 1 uses a terminal provided with the image collection device as an example for description; in practical applications, the image collection device may alternatively be disposed outside the terminal and connected to the terminal via a wired line or a wireless network.
其中,该图像采集装置、连接有扬声器和麦克风的音频电路以及传感器可以作为本申请采集图像、音频等数据的数据采集模块。The image acquisition device, the audio circuit connected with the speaker and the microphone, and the sensor can be used as a data acquisition module for collecting image, audio and the like data in the present application.
处理器180是终端100的控制中心，利用各种数据接口和线路连接整个手机的各个部分，通过运行或执行存储在存储器120内的软件程序和/或模块，以及调用存储在存储器120内的数据，执行终端100的各种功能和处理数据，从而对终端进行整体监控。如，处理器通过数据接口获取图像采集装置采集到的图像数据，同时，该数据接口还可以获取音频电路采集到的音频信号以及传感器采集到的感应信号，并将音频信号以及感应信号传输给处理器。The processor 180 is the control center of the terminal 100, and connects various parts of the entire mobile phone by using various data interfaces and lines. By running or executing the software programs and/or modules stored in the memory 120 and invoking the data stored in the memory 120, the processor 180 performs various functions of the terminal 100 and processes data, thereby monitoring the terminal as a whole. For example, the processor obtains, through the data interface, the image data collected by the image collection device; the data interface may also obtain the audio signal collected by the audio circuit and the sensing signal collected by the sensor, and transmit the audio signal and the sensing signal to the processor.
在本申请实施例中,该处理器可以包括:中央处理器(Central Processing Unit,CPU),还可以包括:图像处理器(Graphics Processing Unit,GPU)。In this embodiment, the processor may include a central processing unit (CPU), and may further include: a graphics processing unit (GPU).
在一种可能的情况中，该处理器180至少可以用于：从数据接口获取终端当前所处环境的环境数据，该环境数据至少包括图像采集装置采集到的环境图像；在该环境图像为二维图像的情况下，将环境数据中的环境图像由二维图像转换为三维图像，得到由该环境数据转换出的虚拟现实数据，该虚拟现实数据用于反映终端所处环境的三维场景；并通过该通信模块将该虚拟现实数据传输给至少一个接收终端，该接收终端为用于接收该虚拟现实数据的终端。In a possible case, the processor 180 may be configured at least to: obtain, from the data interface, environment data of the environment in which the terminal is currently located, the environment data including at least the environment image collected by the image collection device; when the environment image is a two-dimensional image, convert the environment image in the environment data from a two-dimensional image into a three-dimensional image to obtain virtual reality data converted from the environment data, where the virtual reality data reflects a three-dimensional scene of the environment in which the terminal is located; and transmit the virtual reality data to at least one receiving terminal through the communication module, where the receiving terminal is a terminal for receiving the virtual reality data.
具体的,该处理器可用于实现下述图2以及图3所示实施例提供的共享虚拟现实数据的方法中第一终端中处理器所执行的相关操作。Specifically, the processor can be used to implement related operations performed by the processor in the first terminal in the method for sharing virtual reality data provided by the embodiment shown in FIG. 2 and FIG. 3 described below.
终端100还包括给各个部件供电的电源(比如电池)，在一种可能的情况中，电源可以通过电源管理系统与处理器180逻辑相连，从而通过电源管理系统实现管理充电、放电以及功耗等功能。The terminal 100 further includes a power supply (such as a battery) that supplies power to the components. In a possible case, the power supply may be logically connected to the processor 180 through a power management system, so that functions such as charging, discharging, and power-consumption management are implemented through the power management system.
尽管未示出,终端100还可以包括射频模块、蓝牙模块等,在此不再赘述。Although not shown, the terminal 100 may further include a radio frequency module, a Bluetooth module, and the like, and details are not described herein again.
下面结合图1,对本申请实施例提供的一种共享虚拟现实数据的方法的方案进行说明。A scheme for sharing a virtual reality data according to an embodiment of the present application is described below with reference to FIG.
首先,以第一终端向第二终端共享该第一终端所处环境对应的虚拟现实数据为例进行介绍。其中,该第一终端可以理解为需要发送或者共享虚拟现实数据的发送终端;而该第二终端为接收虚拟现实数据的接收终端。First, the virtual terminal data corresponding to the environment in which the first terminal is located is shared by the first terminal to the second terminal as an example. The first terminal may be understood as a transmitting terminal that needs to send or share virtual reality data; and the second terminal is a receiving terminal that receives virtual reality data.
如图2,其示出了本申请一种共享虚拟现实数据的方法一个实施例的流程示意图,本实施例的方法可以包括:FIG. 2 is a schematic flowchart diagram of an embodiment of a method for sharing virtual reality data according to the present application. The method in this embodiment may include:
S201,第一终端采集该第一终端所处环境的视频数据;S201. The first terminal collects video data of an environment in which the first terminal is located.
可以理解的是,视频数据可以由连续的多帧图像组成,为了便于区分,将采集到的该第一终端所处环境的各帧图像称为环境图像。It can be understood that the video data may be composed of consecutive multi-frame images. For the purpose of distinguishing, each frame image of the environment in which the first terminal is located is referred to as an environment image.
如,可以通过第一终端的图像采集装置,例如摄像头或者第一终端外接的摄像头,采集该第一终端当前所处环境的图像。For example, an image of the environment in which the first terminal is currently located may be collected by an image capturing device of the first terminal, such as a camera or an external camera connected to the first terminal.
可以理解的是，本申请实施例中该图像采集装置可以为采集二维(Two Dimensions，2D)图像的摄像头等图像采集装置，相应的，采集到的视频数据可以为包含多帧连续2D图像的视频数据。其中，2D图像也可以称为平面图像，是指具有两个维度的图像，如，可以通过(X,Y)这两个维度表示的图像。It can be understood that, in the embodiments of the present application, the image collection device may be an image collection device such as a camera that collects two-dimensional (2D) images; correspondingly, the collected video data may include multiple frames of consecutive 2D images. A 2D image, which may also be called a planar image, is an image having two dimensions, for example, an image that can be represented by the two dimensions (X, Y).
该图像采集装置也可以是采集三维(Three Dimensions，3D)图像的摄像头等图像采集装置，其中，3D图像也可以称为立体图像，是指具有三个维度的图像，如，通过(X,Y,Z)三个维度表示的图像。如，图像采集装置可以为VR摄像头等，相应的，采集到的视频数据可以为包含多帧连续3D图像的视频数据。The image collection device may alternatively be an image collection device such as a camera that collects three-dimensional (3D) images. A 3D image, which may also be called a stereoscopic image, is an image having three dimensions, for example, an image represented by the three dimensions (X, Y, Z). For example, the image collection device may be a VR camera; correspondingly, the collected video data may include multiple frames of consecutive 3D images.
特别的，为了能够更准确地反映出第一终端所处环境的特征(例如第一终端当前的时间、地点、天气等)，该第一终端的图像采集装置采集该第一终端所处环境的视频数据的同时，还可以通过音频电路采集第一终端所处环境的音频信号，如，通过与音频电路相连的麦克风等音频采集装置采集该第一终端所处环境的音频信号，以使得音频电路可以采集到音频信号。In particular, in order to more accurately reflect features of the environment in which the first terminal is located (for example, the current time, location, and weather of the first terminal), while the image collection device of the first terminal collects the video data of the environment, the audio signal of the environment may also be collected through the audio circuit; for example, an audio collection device such as a microphone connected to the audio circuit collects the audio signal of the environment in which the first terminal is located, so that the audio circuit can obtain the audio signal.
为了使得第二终端侧的用户能够了解到第一终端侧的用户在所处环境中的观看视角等信息，在第一终端采集该视频数据的同时，还可以通过该第一终端的传感器感应第一终端侧用户的用户姿态数据，该用户姿态可以包括用户观看当前环境的观看视角。如，该用户姿态可以为用户的头部仰角等等。In order to enable the user on the second terminal side to learn information such as the viewing angle of the user on the first terminal side in the environment, while the first terminal collects the video data, user posture data of the user on the first terminal side may also be sensed by a sensor of the first terminal, where the user posture may include the viewing angle at which the user views the current environment. For example, the user posture may be the elevation angle of the user's head, and so on.
其中,该视频图像、音频数据可以构成该第一终端侧所处环境的环境数据。The video image and the audio data may constitute environment data of an environment where the first terminal side is located.
S202,第一终端检测当前采集到的环境图像是否属于3D图像,如果是,则将该环境图像作为第一终端当前所处环境的虚拟现实数据,并执行步骤S206;如果否,则执行步骤S203;S202, the first terminal detects whether the currently collected environment image belongs to the 3D image, and if yes, the environment image is used as the virtual reality data of the environment where the first terminal is currently located, and performs step S206; if not, step S203 is performed. ;
如,第一终端的处理器可以检测到该环境图像中是否包含深度信息,如果不包含深度信息,则说明该环境图像为2D图像;如果该环境图像包含深度信息,则该环境图像属于3D图像。For example, the processor of the first terminal may detect whether the environment image includes depth information, if the depth information is not included, the environment image is a 2D image; if the environment image includes depth information, the environment image belongs to a 3D image. .
其中，2D图像与3D图像的差异，就是2D图像缺少深度信息。如，在摄像机坐标系中，以垂直于成像平面并穿过镜头中心的直线为Z轴，如果被摄物体在摄像机坐标系中的坐标为(X,Y,Z)，那么，物体在该Z轴上的值就是该物体相对于该摄像机成像平面的深度信息，该物体的2D图像中不包含该深度信息，而该物体的3D图像中包含该深度信息。The difference between a 2D image and a 3D image is that the 2D image lacks depth information. For example, in the camera coordinate system, the straight line perpendicular to the imaging plane and passing through the lens center is taken as the Z axis; if the coordinates of the photographed object in the camera coordinate system are (X, Y, Z), the value of the object on the Z axis is the depth information of the object relative to the imaging plane of the camera. The 2D image of the object does not contain this depth information, whereas the 3D image of the object does.
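As a hedged illustration of why depth is absent from a 2D image: under a pinhole model, a 3D point (X, Y, Z) in camera coordinates projects to a pixel (u, v), and any point along the same viewing ray lands on the same pixel. The focal lengths and principal point below are hypothetical example values.

```python
def project(X, Y, Z, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-coordinate point to a pixel; the
    Z value itself never appears in the result, which is exactly the
    depth information a 2D image discards."""
    return fx * X / Z + cx, fy * Y / Z + cy

# two points at different depths project to the same pixel
print(project(1.0, 0.5, 2.0))  # (570.0, 365.0)
print(project(2.0, 1.0, 4.0))  # (570.0, 365.0)
```

This ambiguity is why converting a 2D image back to 3D requires constructing the missing Z values, as the method above describes.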
It can be understood that the first terminal captures one frame of the environment image at each moment. In principle, each captured frame needs to be checked for whether it is a 3D image, and if it is not, the operation of converting the 2D environment image into a 3D image is performed. However, because the first terminal generally does not switch image-capture devices (such as cameras) while capturing video images of its environment, it is also possible to check only the first frame of the video data and use that frame's dimensionality to determine the dimensionality of every frame in the video data.
It should be noted that this application uses environment data containing an environment image merely as an example. Because the captured environment data may contain not only the environment image but also data such as audio signals, when the environment image in the environment data is a 3D image, the environment data as a whole may be used as the virtual reality data of the environment in which the first terminal is located.
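As a hedged sketch of the check in step S202, the snippet below treats a frame as 3D when it carries a per-pixel depth channel and inspects only the first frame of a stream, as the preceding paragraph allows when the capture device does not change mid-stream. The dict-based frame layout (`rgb` / `depth` keys) is an assumption made purely for illustration:

```python
def is_3d_frame(frame):
    """A frame is treated as 3D when it carries a per-pixel depth channel.
    The dict layout ('rgb' / 'depth' keys) is an illustrative assumption."""
    return frame.get("depth") is not None

def stream_is_3d(frames):
    """When the capture device does not change mid-stream, it suffices to
    inspect only the first frame (per the note on step S202)."""
    return bool(frames) and is_3d_frame(frames[0])

frames_2d = [{"rgb": [[0]], "depth": None}, {"rgb": [[1]], "depth": None}]
frames_3d = [{"rgb": [[0]], "depth": [[1.5]]}]
```

A real implementation would inspect the container or pixel format delivered by the camera rather than a dict key.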
S203: When the environment image is a 2D image, the first terminal determines the depth information required to convert the environment image from a 2D image into a 3D image.
For example, the processor of the first terminal constructs depth information for the environment image, so that the environment image can later be converted from a 2D image into a 3D image based on that depth information. The first terminal may construct the depth information for the 2D environment image in various ways, such as geometry-based analysis, feature-learning-based methods, or relative-depth / depth-ordering methods. This application does not limit the way in which the depth information required for converting the 2D environment image into a 3D image is constructed.
In particular, to optimize the depth information, if the environment image contains an object in motion, the depth information corresponding to the environment image may be refined by combining the motion information of the moving object with the environment image and the most recent environment-image frame preceding it.
Whether a moving object exists in the environment image can be determined by analyzing the environment image together with one or more adjacent environment-image frames, for example by performing image recognition on adjacent frames and checking whether a captured object has shifted position.
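The depth construction and motion-aware refinement described above might be sketched as follows. Both functions are deliberately crude stand-ins: the geometric prior ("lower rows in the frame are closer") and the blending weight `alpha` are illustrative assumptions, not the application's actual algorithms:

```python
def estimate_depth(height, width):
    """Toy geometry-based prior: rows near the bottom of the frame are
    assumed closer to the camera (depth 0.0), rows at the top farther
    (depth 1.0). A crude stand-in for the geometric-analysis methods
    mentioned in the text."""
    if height == 1:
        return [[0.0] * width]
    return [[(height - 1 - r) / (height - 1) for _ in range(width)]
            for r in range(height)]

def refine_with_previous(depth, prev_depth, alpha=0.5):
    """Blend the current estimate with the previous frame's depth map --
    a minimal stand-in for the motion-aware refinement described above."""
    return [[alpha * d + (1 - alpha) * p for d, p in zip(row, prev)]
            for row, prev in zip(depth, prev_depth)]
```

Production systems would use the learned or geometric estimators the text cites; the point here is only the two-stage shape: estimate per frame, then smooth against the preceding frame.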
S204: The first terminal constructs, from the depth information built for the 2D environment image together with the 2D environment image itself, the 3D environment image corresponding to the 2D environment image.
For ease of distinction, a 3D image constructed from an environment image is referred to as a 3D environment image.
For example, the processor of the first terminal combines the depth information with the environment image, thereby converting the 2D environment image into a 3D environment image.
The 3D environment image constructed from the environment image reflects the three-dimensional scene of the environment in which the first terminal was located when the environment image was captured. The 3D environment image is therefore, in effect, the virtual reality data corresponding to the environment in which the first terminal is currently located, and this virtual reality data can reflect the three-dimensional scene of that environment.
It can be understood that the description above takes the processing of the environment image alone as an example. If what the first terminal captures is environment data containing an environment image, the 2D environment image in the environment data can be converted into a 3D environment image to obtain the virtual reality data converted from the environment data.
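One common way to combine a 2D image with constructed depth into a new view is depth-image-based rendering (DIBR); the application does not name a specific synthesis method, so the single-scanline sketch below is only illustrative. Note how it naturally produces the unfilled "holes" that step S205 addresses:

```python
def synthesize_shifted_view(image_row, depth_row, max_disparity=1):
    """Depth-image-based rendering for one scanline: each pixel is shifted by
    a disparity proportional to its normalised depth, one simple way to turn
    a 2D row plus depth into a second (stereo) view. Target positions that no
    source pixel maps to stay None -- the 'holes' filled later in step S205."""
    width = len(image_row)
    shifted = [None] * width
    for x in range(width):
        disparity = int(round(max_disparity * depth_row[x]))
        tx = x - disparity
        if 0 <= tx < width:
            shifted[tx] = image_row[x]
    return shifted
```

For example, shifting a row where only the third pixel has depth 1 moves that pixel left by one position and leaves a hole where it used to be.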
It should be noted that steps S203 and S204 are one implementation of converting the environment image from a 2D image into a 3D image. Converting the environment image from 2D into 3D in other ways is equally applicable to this embodiment of the application, and details are not described again here.
S205: The first terminal performs hole filling on the constructed 3D environment image.
Hole filling (also called void filling) relies on the basic idea of spatial correlation: the neighboring pixels around a hole in the image are used to estimate the depth values of the pixels inside the hole, and the hole is then filled with the estimated values.
To improve the 3D effect of the rendered 3D environment image, the processor of the first terminal may further apply optimizations such as hole filling to the converted 3D environment image. Of course, this step merely optimizes the 3D effect of the image; in scenarios with low requirements on the 3D effect, step S205 may be skipped.
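The spatial-correlation idea behind hole filling can be reduced to one dimension for illustration. This sketch fills each missing value from its nearest valid neighbours in the original row; a real implementation would operate on 2D depth maps and may use more sophisticated estimators:

```python
def fill_holes(row):
    """Fill each hole (None) from its nearest valid neighbours in the
    original row, averaging when both sides exist -- the spatial-correlation
    idea described in step S205, reduced to one dimension."""
    out = list(row)
    for i, v in enumerate(row):
        if v is None:
            left = next((row[j] for j in range(i - 1, -1, -1)
                         if row[j] is not None), None)
            right = next((row[j] for j in range(i + 1, len(row))
                          if row[j] is not None), None)
            neighbours = [n for n in (left, right) if n is not None]
            out[i] = sum(neighbours) / len(neighbours) if neighbours else 0.0
    return out
```

Applied to the holes produced by view synthesis, this yields a dense depth row again, e.g. `[10, None, 30]` becomes `[10, 20.0, 30]`.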
S206: The first terminal performs VR scene rendering on the 3D environment image to obtain a rendered 3D environment image.
The purpose of the processor performing VR scene rendering on the constructed 3D environment image (or on a captured 3D environment image) is to prevent abnormalities such as distortion when the second terminal later displays the 3D environment image.
To help understand the distortions an image may undergo, several distortion cases, together with the VR scene rendering performed to cancel them, are introduced below as examples.
For example, because the image on the terminal's display is magnified through a lens, the image becomes distorted. To cancel this distortion, the image needs to be stretched and warped in advance, so that the image the terminal displays, once projected onto the user's retina, is undistorted. The processing performed to cancel the distortion of the image output by the terminal's display is called anti-distortion (also known as reverse distortion), and anti-distortion is one kind of VR scene rendering.
As another example, when a beam of white light passes through a prism, what emerges is a rainbow; this is the dispersion phenomenon caused by the different refractive indices of light of different colors. Correspondingly, the same dispersion occurs when the terminal outputs an image. To prevent image distortion caused by dispersion, the principle of reversible optical paths can be used for anti-dispersion: since the light projected from the image disperses as it passes through the lens, the image can be pre-dispersed once before its light enters the lens, so that the image presented through the lens is a normal image.
As a further example, when a normal person's two eyes fixate on the same object, the object is imaged on each retina separately, and the two images are fused in the visual center of the brain into a single, complete, stereoscopic object; this function is called binocular single vision. VR glasses used for viewing VR images work on a principle similar to our eyes: current VR glasses generally split the image content into two halves on the screen and superimpose them through lenses. This often leaves the center of the eye's pupil, the center of the lens, and the center of the (split) screen out of alignment, producing poor visual effects with problems such as blur and deformation. Ideally, the pupil center, the lens center, and the (split) screen center should lie on one straight line. To achieve this, the interpupillary distance of the lenses needs to be adjusted to match the user's interpupillary distance, and the picture center of the screen adjusted, so that the three points lie on one line and the best visual effect is obtained. The processing applied to the 3D environment image so that the pupil center, the lens center, and the (split) screen center lie on one straight line is called interpupillary-distance (IPD) adjustment.
In this embodiment of the application, VR scene rendering may include one or more kinds of processing of the 3D environment image, such as anti-distortion, anti-dispersion, and IPD adjustment. It can be seen that if the captured environment data (which, for ease of distinction, may be called the first environment data) is not 3D data, the first terminal adjusts the parameter information of the environment data (for example, adding depth information, performing hole filling, and so on) to convert it into 3D environment data (the second environment data), and then performs VR scene rendering on the 3D environment data to generate VR-scene-rendered 3D environment data (which may be called the third environment data, or third data). If the environment data captured by the data-collection module (the first environment data) is already 3D data, the first terminal only needs to perform VR scene rendering on it to generate fourth data.
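Of the renderings just described, anti-distortion is the easiest to sketch. The function below applies a single-coefficient radial pre-warp to a normalised screen coordinate; the one-term model and the coefficient value are illustrative assumptions, not any real headset's lens profile:

```python
def predistort(x, y, k1=0.22):
    """Barrel pre-distortion of a normalised screen coordinate: points are
    pulled toward the centre so that the lens's pincushion distortion cancels
    out ('reverse distortion'). The single-coefficient model and the value of
    k1 are illustrative assumptions."""
    r_squared = x * x + y * y
    scale = 1.0 / (1.0 + k1 * r_squared)
    return (x * scale, y * scale)
```

The centre of the image is left untouched, while points farther from the centre are pulled inward progressively more, which is the inverse of how a magnifying lens pushes them outward.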
It can be understood that if virtual reality data containing a 3D environment image has already been obtained before step S206, VR scene rendering can be performed directly on the three-dimensional environment image in that virtual reality data.
S207: The first terminal determines, according to the current network state between the first terminal and the second terminal, the encoding scheme suited to that network state.
The network state can reflect the data-transmission performance between the first terminal and the second terminal; for example, it may include network speed, network signal quality, and the like.
Encoding schemes fall into two broad categories: lossy encoding and lossless encoding. Each category can in turn be divided into multiple encoding modes. For example, lossy encoding may include Audio Video Interleaved (AVI) encoding and Moving Picture Experts Group 4 (MPEG4) encoding; lossless encoding may include Shannon coding, Huffman coding, Run-Length Coding (RLC), and the like.
In this embodiment of the application, the category of encoding to use can be determined from the network speed, and an encoding mode within that category is then used for the encoding.
For example, when the network state is good — say, it indicates that the network speed between the first terminal and the second terminal is high — the processor of the first terminal may choose lossless encoding as the required scheme. Because lossless encoding yields higher image quality, image data of higher quality can be transmitted to the second terminal without affecting the transmission speed. Which specific lossless mode to use under a good network state can be set as needed. When the network state is poor — say, it indicates that the transmission speed between the first terminal and the second terminal is low — the first terminal may choose lossy encoding for the 3D image, to minimize the time needed to transmit the compressed data. Likewise, which specific lossy mode to use under a poor network state can be set as needed.
It can be understood that, so that the first terminal can transmit the environment data to the second terminal, a communication connection may be established between them in advance. For example, an instant-messaging channel may be established between the first terminal and the second terminal, and the encoded 3D environment image is transmitted through that channel. In this case, the network state of the communication channel established between the two terminals can be determined.
Of course, the first terminal may also transmit the 3D environment image to the second terminal by e-mail or other means; the specific communication method used by the first terminal to transmit the 3D environment image is not limited. If the first terminal and the second terminal have not established a network connection before the transmission, the first terminal may determine only its own network state and choose the encoding scheme according to that state.
It should be noted that determining the encoding scheme from the network state is merely one implementation. In practice, the required encoding scheme may also be preset, or selected by the user; this is not limited here.
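The bandwidth-driven choice of step S207 might be sketched as below. The 50 Mbit/s threshold and the concrete codec picked within each family are assumptions for illustration; as the text notes, both can be set as needed:

```python
def choose_codec(bandwidth_mbps, lossless_threshold_mbps=50.0):
    """Map measured bandwidth to an encoding family as in step S207.
    The threshold and the concrete mode chosen within each family are
    illustrative assumptions, not values given in the text."""
    if bandwidth_mbps >= lossless_threshold_mbps:
        return ("lossless", "huffman")   # higher image quality
    return ("lossy", "mpeg4")            # shorter transfer time
```

A deployed system would typically also weigh signal quality and latency, which the text includes in the network state alongside raw speed.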
S208: The first terminal encodes the VR-scene-rendered 3D environment image according to the determined encoding scheme to obtain encoded 3D environment data.
It can be understood that if, while capturing the environment image, the first terminal also captures audio data and data such as the user's viewing angle, the first terminal encodes the environment data — including the VR-scene-rendered 3D environment data and the audio data — together with data such as the user's viewing angle, so that the encoded environment data and the viewing angle of the first-terminal-side user can later be sent to the second terminal together.
S209: The first terminal sends the encoded 3D environment image to the second terminal.
For example, the processor of the first terminal passes the encoded 3D environment image to the communication module of the first terminal, which transmits it to the second terminal.
There are various possibilities for the network over which the first terminal transmits the encoded 3D environment image to the second terminal; for example, it may be transmitted over a wired network, or over a wireless network such as Bluetooth or Wi-Fi. The transmission protocol used for the encoded 3D environment image depends on the transmission method chosen.
It can be understood that because step S201 continuously captures environment images, in practice the first terminal may keep repeating steps S202 through S209 until every environment image in the captured video has been processed and sent to the second terminal.
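Putting steps S202 through S209 together, the per-frame loop might look like the following control-flow sketch. Every helper here is a trivial stand-in so the pipeline can run end to end; real implementations would perform the processing described in the preceding steps:

```python
def share_environment_stream(frames, send, bandwidth_mbps=5.0):
    """Per-frame pipeline of steps S202-S209. All helpers are trivial
    stand-ins chosen only to make the control flow executable."""
    is_3d = lambda f: f.get("depth") is not None                   # S202
    estimate_depth = lambda f: [[1.0] * len(r) for r in f["rgb"]]  # S203
    to_3d = lambda f, d: {**f, "depth": d}                         # S204
    fill_holes = lambda f: f                                       # S205
    vr_render = lambda f: {**f, "rendered": True}                  # S206
    codec = "lossless" if bandwidth_mbps >= 50 else "lossy"        # S207
    for frame in frames:
        if not is_3d(frame):
            frame = fill_holes(to_3d(frame, estimate_depth(frame)))
        send((codec, vr_render(frame)))                            # S208-S209

sent = []
share_environment_stream([{"rgb": [[0, 0]], "depth": None}], sent.append)
```

The loop mirrors the text: 3D frames skip straight to rendering, while 2D frames first pass through depth construction, conversion, and hole filling.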
This embodiment of the application is described using the following example: when an environment image of the first terminal's current environment is obtained, if the currently obtained environment image is two-dimensional, it is converted into virtual reality data in real time, and the converted virtual reality data is transmitted to the second terminal in real time.
It can be understood, however, that in scenarios where the first terminal's environment does not need to be shared in real time, the first terminal may instead finish capturing the environment images of its environment first, then convert the captured environment images into virtual reality data one by one and perform VR scene rendering on them; the VR-rendered 3D images may then be encoded and transmitted one after another, or all of them may be encoded together and sent to the second terminal at once.
It should be noted that the description above takes the first terminal continuously capturing multiple frames of environment images as an example. If the first terminal captures only a single frame, it can likewise be processed in the manner of this embodiment; the process is similar and is not described again here.
In particular, when the environment image in the environment data has been converted into a 3D image and VR scene rendering has been performed on the converted 3D image, the environment data contains the VR-scene-rendered 3D environment data, so the environment data can be encoded directly and the encoded environment data then transmitted to the second terminal.
This embodiment is described using the example of the first terminal sending the virtual reality data corresponding to the environment image (the 3D environment image, or video data containing the 3D environment image) to one second terminal. It can be understood, however, that the first terminal may send the virtual reality data to multiple second terminals as needed; the specific process is similar and is not described again here.
S210: Upon receiving the encoded 3D environment image, the second terminal decodes the encoded 3D environment data to obtain the 3D environment image.
For example, the second terminal receives, through its communication module, the encoded 3D environment image transmitted by the first terminal, and the processor of the second terminal (which may invoke a data-decoding module) decodes it to recover the 3D environment image.
When what the first terminal sends to the second terminal is encoded environment data, the environment data can be decoded to recover the environment data containing the 3D environment image.
S211: The second terminal obtains the specified viewing angle for currently viewing the 3D environment image.
S212: The second terminal renders the 3D environment image according to the specified viewing angle, obtaining the 3D environment image as presented from that viewing angle.
It can be understood that the image decoded by the second terminal is a 3D image, whereas the image the second terminal outputs to its display may be the view from any angle. Therefore, the processor of the second terminal may first determine a specified viewing angle and render the 3D environment image according to it, obtaining the 3D environment image as presented from that angle.
The specified viewing angle may be a preset default angle, or a viewing angle selected in advance or in real time by the user on the second-terminal side.
It can be understood that if, along with the encoded 3D environment data, the communication module of the second terminal receives encoded user-pose data, the processor of the second terminal can decode the user-pose data. Accordingly, the processor can determine from it the viewing angle of the user on the first-terminal side and use that angle as the default viewing angle. In this way, when the user on the second-terminal side does not adjust the viewing angle, the second terminal renders the 3D environment image according to the first-terminal-side user's viewing angle, so that the second-terminal user can experience the environment scene as seen by the user on the first-terminal side.
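Re-rendering the decoded 3D image from a chosen viewing angle amounts to applying a view transform before projection. The minimal sketch below rotates a scene point about the vertical axis; per step S211, the angle would default to the first-terminal user's pose-derived viewing angle when the second-terminal user selects none. The single-axis model is an illustrative simplification:

```python
import math

def rotate_view(point, yaw_degrees):
    """Rotate a scene point about the vertical (Y) axis -- a minimal stand-in
    for re-rendering the decoded 3D environment image from the specified
    viewing angle in step S212. A full renderer would compose yaw, pitch and
    roll from the decoded user-pose data."""
    x, y, z = point
    a = math.radians(yaw_degrees)
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))
```

For instance, a 90-degree yaw moves a point that was straight ahead (on the +Z axis) around to the viewer's side (+X axis), which is exactly the effect of turning the viewing angle.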
S213: The second terminal outputs the 3D environment image, presented from the specified viewing angle, to the display.
For example, the processor of the second terminal passes the 3D environment image rendered according to the specified viewing angle to the display unit of the second terminal, which outputs it.
After the second terminal outputs the 3D environment image, the user of the second terminal can view it through a virtual reality device, either from the viewing angle associated with the first terminal's user or from any angle of their own choosing, and can thereby experience the visual sensation of being present in the environment of the first-terminal-side user.
In this embodiment of the application, the first terminal can obtain an environment image of the environment in which it is located, construct from that image a 3D environment image reflecting the three-dimensional scene of that environment, and send the 3D environment image to at least one second terminal. In this way, by viewing the 3D environment image output by the second terminal, the user of the second terminal can enjoy the visual experience of being in the environment of the first terminal, experiencing the same environment scene as the first terminal.
It can be understood that the description above uses the example of the first terminal sharing with the second terminal the virtual reality data corresponding to the first terminal's environment. In practical applications, the first terminal may also take a video image that the first-terminal-side user is currently watching as the video image to be shared, process it with the virtual-reality-data sharing method of this application, and share it with the second terminal, so that the user of the second terminal can enjoy the same experience as the first terminal's user watching that video image. Of course, the first terminal may also take video images stored on the first terminal as the data to be shared, process the stored video images into virtual reality data with the method of this application, and transmit them to the second terminal.
The following describes an example in which the first terminal shares a video image it is currently playing with the second terminal.
FIG. 3 is a schematic flowchart of still another embodiment of the method for sharing virtual reality data according to this application. The method of this embodiment may include the following steps.
S301: The first terminal obtains the video data currently being played by the first terminal.
The video data includes at least a video image, and may further include an audio signal.
The video data may be one frame of image together with the audio signal associated with that frame, for example, video data composed of the video frame currently being played in a video file on the terminal and the audio signal being output.
Of course, the video data may also be a video file containing multiple video frames and their associated audio signals. In that case, the first terminal may process each video frame in the video file in turn; the processing is similar to the process of this embodiment and is not described again here.
In one implementation, the first terminal may obtain the video data currently playing in a target playback window displayed by the first terminal, where the target playback window may be the image-output window of a specified application on the first terminal.
For example, the specified application may be a game application, and the target playback window is the game window in which the game application outputs the game picture. In this scenario, the video data obtained by the first terminal may be game data, including the game picture, in-game sound signals, and so on. With the virtual reality sharing method of this embodiment, the game picture in the game data can be converted into virtual reality data and shared with the user of the second terminal, so that the second-terminal user can view the three-dimensional scene corresponding to the game picture played on the first-terminal side.
As another example, the specified application may be a media player, in which case the target playback window may be the player's video playback window. In this scenario, the first terminal may obtain the video images played in the player's playback window and, with the virtual reality sharing method of this embodiment, convert them into virtual reality data and share them with the user of the second terminal, so that the second-terminal user can experience being inside the three-dimensional scene corresponding to the video images played by the player on the first-terminal side.
Of course, the specified application may also be a browser or any other application with an image-output function; this is not limited here.
In a possible implementation, while obtaining the video image currently being played, the first terminal may also use a sensor or similar device to sense the viewing angle at which the first-terminal-side user watches the video image, so that the second-terminal side can later view the video image from the same viewing angle.
S302,第一终端检测该视频数据中的视频图像是否属于3D图像,如果是,则将该视频数据作为第一终端当前播放的虚拟现实数据,并执行步骤S306;如果否,则执行步骤S303;S302, the first terminal detects whether the video image in the video data belongs to the 3D image, and if so, the video data is used as the virtual reality data currently played by the first terminal, and step S306 is performed; if not, step S303 is performed;
可以理解的是,在视频数据包含多帧视频图像的情况下,均需要检测当前播放的视频图像是否属于3D图像,并在该视频图像不属于3D图像的情况下,执行将2D视频图像转换为3D图像的操作。当然,考虑到一个视频文件内的视频图像的维度一般是相同的,也可以仅对视频数据中的第一帧视频图像进行检测,并根据该第一帧图像的维度,确定视频数据所包含的各帧图像的维度。It can be understood that, when the video data contains multiple frames of video images, it is necessary to detect whether each currently played video image is a 3D image, and to perform the operation of converting the 2D video image into a 3D image when it is not. Of course, considering that the video images within one video file generally have the same dimensionality, it is also possible to detect only the first frame of the video data and determine the dimensionality of every frame of the video data from that first frame.
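As an illustration of the detection in S302, a minimal heuristic is sketched below. It assumes the common side-by-side frame packing for stereo video; the function name, the packing assumption, and the threshold value are all hypothetical and not taken from the patent.

```python
import numpy as np

def looks_like_side_by_side_3d(frame: np.ndarray, threshold: float = 10.0) -> bool:
    """Heuristic check for a side-by-side stereo frame.

    A side-by-side 3D frame packs the left-eye and right-eye views into
    the two horizontal halves, so the halves are strongly correlated.
    `threshold` is the maximum mean absolute difference (a hypothetical
    tuning value) below which the halves are treated as a stereo pair.
    """
    h, w = frame.shape[:2]
    left = frame[:, : w // 2].astype(np.float64)
    right = frame[:, w - w // 2 :].astype(np.float64)
    return float(np.mean(np.abs(left - right))) < threshold
```

A production detector would also check top-bottom packing and use more robust similarity measures; this sketch only shows the idea of classifying per-frame dimensionality before deciding whether the 2D-to-3D conversion is needed.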
S303,在视频图像为2D图像的情况下,第一终端确定将视频数据中该视频图像从2D图像转换为3D图像所需的深度信息;S303. In a case where the video image is a 2D image, the first terminal determines depth information required to convert the video image in the video data from the 2D image to the 3D image;
特别的,为了优化深度信息,如果该视频图像中存在处于运动状态的运动对象,则可以结合运动对象的运动信息以及视频数据中处于该视频图像之前的最近一帧图像,对该视频图像对应的深度信息进行优化处理。In particular, in order to optimize the depth information, if there is a moving object in the video image, the motion information of the moving object and the nearest preceding frame in the video data may be combined to optimize the depth information corresponding to the video image.
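The optimization described above can be sketched as a simple temporal blend, assuming a per-pixel motion measure is available (e.g. a normalized optical-flow magnitude); the weighting scheme below is an illustrative assumption, not the patent's method.

```python
import numpy as np

def refine_depth_with_motion(depth: np.ndarray,
                             prev_depth: np.ndarray,
                             motion_magnitude: np.ndarray) -> np.ndarray:
    """Temporally smooth a per-pixel depth map (illustrative only).

    `motion_magnitude` is assumed to be a per-pixel motion measure in
    [0, 1]. Static pixels (motion ~ 0) keep the previous frame's depth,
    which suppresses temporal flicker; moving pixels keep the fresh
    estimate from the current frame.
    """
    w = np.clip(motion_magnitude, 0.0, 1.0)
    return w * depth + (1.0 - w) * prev_depth
```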
S304,第一终端依据视频图像对应的深度信息以及该视频图像,将该视频数据中的视频图像转换为3D视频图像,得到由视频数据转换出的虚拟现实数据;S304. The first terminal converts the video image in the video data into a 3D video image according to the depth information corresponding to the video image and the video image, to obtain virtual reality data converted by the video data.
其中,为了便于区分,将视频图像转换出的3D图像称为3D视频图像,而由于包含该3D视频图像的视频数据实际上是一个三维的视频数据,因此,转换后的视频数据可以称为虚拟现实数据,该虚拟现实数据用于反映该第一终端当前播放的视频图像的三维场景。To facilitate the distinction, the 3D image converted from the video image is referred to as a 3D video image; and since the video data containing the 3D video image is actually three-dimensional video data, the converted video data may be referred to as virtual reality data, which is used to reflect the three-dimensional scene of the video image currently played by the first terminal.
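One common way to realize a 2D-to-3D step like S304 is depth-image-based rendering (DIBR), sketched below under the assumption that disparity is proportional to normalized depth. This is an illustrative stand-in, not the specific conversion the patent claims.

```python
import numpy as np

def synthesize_stereo_pair(image: np.ndarray,
                           depth: np.ndarray,
                           max_disparity: int = 8):
    """Depth-image-based rendering sketch: shift each pixel horizontally
    by a disparity proportional to its normalized depth to obtain a
    left/right view pair. Unfilled positions stay 0 and would be handled
    by the hole-filling step (S305). The row-wise loop is kept simple
    on purpose.
    """
    h, w = depth.shape
    disparity = np.rint(depth / max(depth.max(), 1e-6) * max_disparity).astype(int)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            if 0 <= x + d < w:
                left[y, x + d] = image[y, x]
            if 0 <= x - d < w:
                right[y, x - d] = image[y, x]
    return left, right
```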
S305,第一终端对虚拟现实数据中3D视频图像进行空洞填补和优化。S305. The first terminal performs hole filling and optimization on the 3D video image in the virtual reality data.
当然,该步骤仅仅是为了优化图像的3D效果,在对3D效果要求不高的场景中,也可以不执行该步骤S305。Of course, this step is only for optimizing the 3D effect of the image; in scenarios where the requirement on the 3D effect is not high, step S305 may be skipped.
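A toy version of the hole filling mentioned in S305 is shown below. It assumes holes are marked with a sentinel value and fills them from the nearest valid pixel to the left; real inpainting (the patent does not specify a method) would be considerably more sophisticated.

```python
import numpy as np

def fill_holes_horizontally(view: np.ndarray, hole_value: int = 0) -> np.ndarray:
    """Minimal disocclusion-hole filling for a synthesized view.

    Propagates the nearest valid pixel from the left along each row.
    Pixels equal to `hole_value` are treated as holes, which is only a
    toy convention for this sketch.
    """
    out = view.copy()
    h, w = out.shape[:2]
    for y in range(h):
        last_valid = None
        for x in range(w):
            if np.all(out[y, x] == hole_value):
                if last_valid is not None:
                    out[y, x] = last_valid
            else:
                last_valid = out[y, x].copy() if out.ndim == 3 else out[y, x]
    return out
```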
S306,第一终端对虚拟现实数据中的3D视频图像进行VR场景渲染,得到渲染后的虚拟现实数据;S306. The first terminal performs VR scene rendering on the 3D video image in the virtual reality data to obtain the rendered virtual reality data.
如,VR场景渲染包括:对该3D视频图像进行反畸变、反色散以及瞳距调节等一种或多种处理。For example, the VR scene rendering includes performing one or more kinds of processing on the 3D video image, such as anti-distortion, anti-dispersion, and interpupillary distance adjustment.
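The anti-distortion part of this rendering can be sketched as a radial pre-warp of image coordinates. The polynomial model and the coefficient values below are hypothetical examples, since real lens parameters are device-specific.

```python
def predistort(x: float, y: float, k1: float = 0.22, k2: float = 0.24):
    """Radial pre-distortion ("anti-distortion") of one normalized image
    coordinate, a sketch of part of the S306 rendering step.

    (x, y) are lens-centered coordinates in [-1, 1]; k1 and k2 are
    hypothetical lens coefficients. Rendering the frame through this
    inverse model cancels the pincushion distortion introduced by a
    typical VR headset lens.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```

Interpupillary distance adjustment would similarly offset the left-eye and right-eye projection centers before this warp is applied.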
将视频图像转换为3D视频图像的过程与前面实施例中将环境图像转换为3D环境图像的过程相似,具体可以参见前面实施例的相关介绍,在此不再赘述。The process of converting the video image into the 3D video image is similar to the process of converting the environment image into the 3D environment image in the previous embodiment. For details, refer to the related description of the previous embodiment, and details are not described herein again.
S307,第一终端依据当前该第一终端与第二终端之间的网络状态,确定该网络状态所适用的编码方式;S307. The first terminal determines, according to the current network state between the first terminal and the second terminal, a coding mode applicable to the network state.
需要说明的是,依据网络状态确定编码方式仅仅是一种实现方式,在实际应用中,也可以预先设定所需的编码方式,或者由用户选择所需的编码方式,在此不加以限制。It should be noted that determining the coding mode according to the network state is only an implementation manner. In an actual application, the required coding mode may be preset, or the user may select a desired coding mode, which is not limited herein.
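A toy policy illustrating how a step like S307 might map a measured network state to an encoding configuration; the codec names, bitrates, and thresholds below are invented for illustration and are not values from the patent.

```python
def choose_encoding(bandwidth_mbps: float, rtt_ms: float) -> dict:
    """Map the measured link state to an encoding configuration.

    All thresholds and codec choices here are illustrative assumptions:
    a strong, low-latency link gets a high-resolution HEVC stream,
    weaker links fall back to lower-bitrate H.264 profiles.
    """
    if bandwidth_mbps >= 25 and rtt_ms < 50:
        return {"codec": "h265", "bitrate_mbps": 20, "resolution": "4K"}
    if bandwidth_mbps >= 8:
        return {"codec": "h264", "bitrate_mbps": 6, "resolution": "1080p"}
    return {"codec": "h264", "bitrate_mbps": 2, "resolution": "720p"}
```

As the surrounding text notes, the same configuration could equally be preset or user-selected instead of derived from measurements.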
S308,第一终端依据确定出的编码方式,对经过VR场景渲染的虚拟现实数据进行编码,得到经过编码的虚拟现实数据。S308. The first terminal encodes the virtual reality data that is rendered by the VR scene according to the determined encoding manner, to obtain the encoded virtual reality data.
可以理解的是,如果第一终端获取视频数据的同时,获取到该第一终端侧用户的观看视角,则可以将虚拟现实数据与该用户的观看视角一起进行编码,以便后续将经过编码后的虚拟现实数据以及观看视角一起发送给第二终端。It can be understood that, if the first terminal acquires the viewing angle of the user on the first terminal side while acquiring the video data, the virtual reality data may be encoded together with the viewing angle of the user, so that the encoded virtual reality data and the viewing angle can subsequently be sent to the second terminal together.
对虚拟现实数据进行编码以及发送的过程可以参见前面实施例中相关过程,在此不再赘述。For the process of encoding and transmitting the virtual reality data, refer to the related process in the previous embodiment; details are not described herein again.
S309,第一终端将经过编码的虚拟现实数据发送给第二终端。S309. The first terminal sends the encoded virtual reality data to the second terminal.
S310,第二终端在接收到经过编码的虚拟现实数据的情况下,对该经过编码的虚拟现实数据进行解码,得到包含3D视频图像的虚拟现实数据。S310. The second terminal, after receiving the encoded virtual reality data, decodes the encoded virtual reality data to obtain virtual reality data including a 3D video image.
S311,第二终端获取当前观看该3D视频图像的指定观看视角。S311. The second terminal acquires a specified viewing angle of the current viewing of the 3D video image.
S312,第二终端按照该指定观看视角,对该3D视频图像进行渲染,得到以该指定观看视角所呈现出的3D视频图像。S312. The second terminal renders the 3D video image according to the specified viewing angle, to obtain the 3D video image presented at the specified viewing angle.
其中,该指定观看视角可以为预先设定的默认视角,也可以是由第二终端侧的用户预先选择或者实时选择的观看视角。The specified viewing angle may be a preset default viewing angle, or may be a viewing angle pre-selected or selected in real time by the user on the second terminal side.
可以理解的是,在第二终端接收到经过编码的虚拟现实数据的同时,如果第二终端的通信模块接收到第一终端侧用户观看该视频图像的观看视角,可以将该观看视角作为默认的观看视角。It can be understood that, when the second terminal receives the encoded virtual reality data, if the communication module of the second terminal also receives the viewing angle at which the user on the first terminal side watches the video image, that viewing angle can be used as the default viewing angle.
S313,第二终端将以该指定观看视角所呈现出的3D视频图像输出到显示屏。S313. The second terminal outputs the 3D video image presented by the specified viewing angle to the display screen.
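For the viewpoint-dependent rendering in S311-S313, the sketch below assumes (purely for illustration) that the frame is stored as an equirectangular panorama whose columns span yaw 0..360 degrees, and extracts the horizontal viewport for a given yaw. Real head-pose rendering would also handle pitch, stereo, and projection.

```python
import numpy as np

def extract_viewport(panorama: np.ndarray, yaw_deg: float, fov_deg: float = 90.0) -> np.ndarray:
    """Crop a horizontal viewport out of an equirectangular frame for a
    chosen viewing angle. Columns wrap around at the 0/360 seam, so the
    viewport is assembled with modular column indices.
    """
    h, w = panorama.shape[:2]
    center = int(round((yaw_deg % 360.0) / 360.0 * w))
    half = max(1, int(round(fov_deg / 360.0 * w / 2)))
    cols = [(center + dx) % w for dx in range(-half, half)]
    return panorama[:, cols]
```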
本实施例的方案中,第一终端将该第一终端当前播放的视频图像对应的3D图像发送给至少一个第二终端。这样,如果第一终端的用户所观看的视频图像为3D视频图像,那么将该3D视频图像共享给第二终端,可以使第二终端的用户体验到与第一终端的用户相同的观看体验;而如果第一终端的用户所观看的视频图像为2D视频图像,通过本申请的方案可以将第一终端侧所播放的视频图像转换为3D图像并分享给第二终端,从而实现了3D图像的分享,使得第二终端的用户可以同步观看该第一终端侧的用户所观看的视频图像所对应的虚拟现实数据。In the solution of this embodiment, the first terminal sends the 3D image corresponding to the video image currently played by the first terminal to at least one second terminal. In this way, if the video image watched by the user of the first terminal is a 3D video image, sharing the 3D video image with the second terminal allows the user of the second terminal to have the same viewing experience as the user of the first terminal; and if the video image watched by the user of the first terminal is a 2D video image, the solution of this application can convert the video image played on the first terminal side into a 3D image and share it with the second terminal, thereby implementing 3D image sharing, so that the user of the second terminal can synchronously view the virtual reality data corresponding to the video image watched by the user on the first terminal side.
另一方面,本申请还提供了一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当所述指令在终端上运行时,使得该终端执行如上任意一种共享虚拟现实数据的方法。In another aspect, the present application further provides a computer readable storage medium storing instructions which, when run on a terminal, cause the terminal to perform any one of the methods of sharing virtual reality data described above.
另一方面,本申请还提供了一种包含指令的计算机程序产品,当该计算机程序产品在终端上运行时,使得该终端执行如上所述的共享虚拟现实数据的方法。In another aspect, the present application also provides a computer program product comprising instructions for causing a terminal to perform a method of sharing virtual reality data as described above when the computer program product is run on a terminal.
另一方面,本申请还提供了一种共享虚拟现实数据的装置,该装置可以应用于前面所提到的发送虚拟现实数据的终端中。On the other hand, the present application also provides an apparatus for sharing virtual reality data, which can be applied to the aforementioned terminal for transmitting virtual reality data.
如,参见图4,其示出了本申请一种共享虚拟现实数据的装置一个实施例的组成结构示意图,本实施的装置可以包括:For example, referring to FIG. 4, it is a schematic structural diagram of an embodiment of an apparatus for sharing virtual reality data according to the present application. The apparatus of this implementation may include:
数据采集模块401,用于获取终端所处环境的环境数据,该环境数据至少包括环境图像。该环境数据可以理解为由至少一帧环境图像组成的视频数据。如,数据采集模块获取通过摄像头(采集二维图像的摄像头或者采集VR数据的摄像头)等图像采集装置采集到的该终端所处环境的环境图像。The data collection module 401 is configured to obtain environment data of an environment in which the terminal is located, where the environment data includes at least an environment image. The environmental data can be understood as video data composed of at least one frame of environment image. For example, the data acquisition module acquires an environment image of an environment in which the terminal is collected by an image acquisition device such as a camera (a camera that captures a two-dimensional image or a camera that collects VR data).
图像处理模块402,用于在该环境图像为二维图像的情况下,将该环境数据中的该环境图像由二维图像转换为三维图像,得到由该环境数据转换出的虚拟现实数据,该虚拟现实数据用于反映所述终端所处环境的三维场景。The image processing module 402 is configured to, in a case where the environment image is a two-dimensional image, convert the environment image in the environment data from a two-dimensional image into a three-dimensional image, to obtain virtual reality data converted from the environment data, the virtual reality data being used to reflect the three-dimensional scene of the environment in which the terminal is located.
数据传输模块403,用于将所述虚拟现实数据传输给至少一个接收终端。The data transmission module 403 is configured to transmit the virtual reality data to the at least one receiving terminal.
其中,该数据采集装置获取到的环境数据还可以包括:通过终端上的麦克风等采集到的音频数据,以及通过传感器感应到的用户姿态等数据。获取到的环境数据以及用户姿态等数据可以统称为用户环境数据。其中,该用户姿态可以为用户观看该第一终端所处环境的观看视角。相应的,该数据传输模块403在传输该虚拟现实数据的同时,还可以传输该用户姿态数据,以使得接收设备可以按照该观看视角渲染所述虚拟现实数据,并输出以所述观看视角呈现出的虚拟现实数据。The environment data acquired by the data collection device may further include audio data collected through a microphone or the like on the terminal, and data such as a user posture sensed by a sensor. The acquired environment data and the user posture data may be collectively referred to as user environment data. The user posture may be the viewing angle at which the user views the environment in which the first terminal is located. Correspondingly, while transmitting the virtual reality data, the data transmission module 403 may also transmit the user posture data, so that the receiving device can render the virtual reality data according to the viewing angle and output the virtual reality data presented at the viewing angle.
在一种实现方式中,该图像处理模块在从数据采集模块获取到环境数据之后,还可以检测采集到的数据是否为3D数据。In an implementation, after acquiring the environment data from the data collection module, the image processing module may further detect whether the collected data is 3D data.
如果该环境数据为2D数据,该图像处理模块在将环境数据中的环境图像转换为3D图像的过程可以为:确定2D环境数据对应的深度信息,并利用获取的深度信息和该2D环境数据构建出3D环境数据。其中,在处理有运动信息的视频时利用运动信息和帧间信息优化深度信息。If the environment data is 2D data, the process of the image processing module converting the environment image in the environment data into the 3D image may be: determining the depth information corresponding to the 2D environment data, and constructing the acquired depth information and the 2D environment data. Out of 3D environment data. Among them, the motion information and the inter-frame information are used to optimize the depth information when processing the video with motion information.
进一步的,该图像处理模块还可以对该3D环境数据进行空洞填补和优化。Further, the image processing module can also perform hole filling and optimization on the 3D environment data.
进一步的,该图像处理模块在构建出3D环境数据之后,还可以对3D环境数据进行渲染,即进行前面所提到的VR场景渲染。Further, after the image processing module constructs the 3D environment data, the 3D environment data may also be rendered, that is, the VR scene rendering mentioned above is performed.
如果图像处理模块确定出该环境数据为3D数据,则该图像处理模块还可以将该环境数据确定为虚拟现实数据,以便直接对该3D环境数据进行VR场景渲染。If the image processing module determines that the environment data is 3D data, the image processing module may further determine the environment data as the virtual reality data, so as to directly perform VR scene rendering on the 3D environment data.
在一种实现方式中,该装置还可以包括:数据编码模块404,用于基于终端的网络状态确定编码模式;按照该编码模式对虚拟现实数据进行编码。该数据编码模块可以根据网络状态,自适应选择最适宜的编码方式,并按着选定的编码方式对3D环境数据进行编码,以提高后续传输虚拟现实数据的速度和可靠性。In an implementation manner, the apparatus may further include: a data encoding module 404, configured to determine an encoding mode based on a network state of the terminal; and encode the virtual reality data according to the encoding mode. The data encoding module can adaptively select the most suitable coding mode according to the network state, and encode the 3D environment data according to the selected coding mode, so as to improve the speed and reliability of the subsequent transmission of the virtual reality data.
另一方面,本申请还提供了又一种共享虚拟现实数据的装置,该装置可以应用于接收该虚拟现实数据的接收终端中。In another aspect, the present application further provides another apparatus for sharing virtual reality data, the apparatus being applicable to a receiving terminal that receives the virtual reality data.
如,参见图5,其示出了又一种共享虚拟现实数据的装置一个实施例的组成结构示意图,本实施例的装置可以包括:For example, referring to FIG. 5, a schematic structural diagram of another embodiment of an apparatus for sharing virtual reality data is shown. The apparatus of this embodiment may include:
数据接收模块501,用于从发送终端接收虚拟现实数据。其中,该数据接收模块501可以通过传输协议与发送终端的数据传输模块建立传输连接。The data receiving module 501 is configured to receive virtual reality data from the sending terminal. The data receiving module 501 can establish a transmission connection with the data transmission module of the transmitting terminal by using a transmission protocol.
显示模块502,用于显示该虚拟现实数据。The display module 502 is configured to display the virtual reality data.
其中,该显示模块将虚拟现实数据输出至的显示屏幕包括但不局限于手机屏幕、VR头盔自带的显示屏幕等。The display screen to which the display module outputs the virtual reality data includes, but is not limited to, a mobile phone screen, a display screen built into a VR headset, and the like.
在一种实现方式中,该装置还包括数据解码模块503。In one implementation, the apparatus further includes a data decoding module 503.
如果该数据接收模块接收到的虚拟现实数据为编码后的虚拟现实数据,则该数据接收模块将该虚拟现实数据传递给数据解码模块。该数据解码模块用于对该虚拟现实数据进行解码,并将解码后的虚拟现实数据传输给显示模块进行显示。 If the virtual reality data received by the data receiving module is encoded virtual reality data, the data receiving module transmits the virtual reality data to the data decoding module. The data decoding module is configured to decode the virtual reality data, and transmit the decoded virtual reality data to the display module for display.
本申请中各个实施例采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似部分互相参见即可。对于实施例公开的装置而言,由于其与实施例公开的方法相对应,所以描述的比较简单,相关之处参见方法部分说明即可。The various embodiments in the present application are described in a progressive manner, and each embodiment focuses on differences from other embodiments, and the same similar parts between the various embodiments may be referred to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant parts can be referred to the method part.
专业人员还可以进一步意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、计算机软件或者二者的结合来实现。为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。A person skilled in the art may further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered as beyond the scope of this application.
结合本文中所公开的实施例描述的方法或算法的步骤可以直接用硬件、处理器执行的软件模块,或者二者的结合来实施。软件模块可以置于随机存储器(RAM)、内存、只读存储器(ROM)、电可编程ROM、电可擦除可编程ROM、寄存器、硬盘、可移动磁盘、CD-ROM、或技术领域内所公知的任意其它形式的存储介质中。The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
对所公开的实施例的上述说明,使本领域专业技术人员能够实现或使用本申请。对这些实施例的多种修改对本领域的专业技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本申请的精神或范围的情况下,在其它实施例中实现。因此,本申请将不会被限制于本文所示的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。 The above description of the disclosed embodiments enables those skilled in the art to make or use the application. Various modifications to these embodiments are obvious to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the application. Therefore, the application is not limited to the embodiments shown herein, but is to be accorded the broadest scope of the principles and novel features disclosed herein.

Claims (25)

  1. 一种共享虚拟现实数据的方法,其特征在于,包括:A method for sharing virtual reality data, comprising:
    获取第一终端当前所处环境的环境数据,所述环境数据至少包括:第一终端所处环境的环境图像;Obtaining environment data of an environment in which the first terminal is currently located, where the environment data includes at least: an environment image of an environment in which the first terminal is located;
    在所述环境图像为二维图像的情况下,将所述环境数据中的所述环境图像由二维图像转换为三维图像,得到由所述环境数据转换出的虚拟现实数据,所述虚拟现实数据用于反映所述第一终端所处环境的三维场景;In the case that the environment image is a two-dimensional image, converting the environment image in the environment data from a two-dimensional image to a three-dimensional image, and obtaining virtual reality data converted by the environment data, the virtual reality The data is used to reflect a three-dimensional scene of the environment in which the first terminal is located;
    将所述虚拟现实数据传输给至少一个第二终端。Transmitting the virtual reality data to at least one second terminal.
  2. 根据权利要求1所述的共享虚拟现实数据的方法,其特征在于,还包括:The method of sharing virtual reality data according to claim 1, further comprising:
    在所述环境图像为三维图像的情况下,将所述环境数据确定为所述用于反映所述第一终端所处环境的三维场景的虚拟现实数据。In a case where the environment image is a three-dimensional image, determining the environment data as the virtual reality data for reflecting the three-dimensional scene of the environment in which the first terminal is located.
  3. 根据权利要求1或2所述的共享虚拟现实数据的方法,其特征在于,在得到虚拟现实数据之后,还包括:The method for sharing virtual reality data according to claim 1 or 2, further comprising: after obtaining the virtual reality data,
    对所述虚拟现实数据中三维的环境图像进行虚拟现实场景渲染,所述虚拟现实场景渲染包括:反向畸变、反色散和瞳距调节中的一种或几种。Performing virtual reality scene rendering on the three-dimensional environment image in the virtual reality data, the virtual reality scene rendering including one or more of inverse distortion, inverse dispersion, and interpupillary distance adjustment.
  4. 根据权利要求1或2所述的共享虚拟现实数据的方法,其特征在于,所述将所述环境数据中的所述环境图像由二维图像转换为三维图像,包括:The method of sharing virtual reality data according to claim 1 or 2, wherein the converting the environment image in the environment data from a two-dimensional image to a three-dimensional image comprises:
    为所述环境数据中的所述环境图像创建深度信息;Creating depth information for the environment image in the environmental data;
    利用所述深度信息和所述环境图像,构建所述环境图像对应的三维图像。Using the depth information and the environment image, a three-dimensional image corresponding to the environment image is constructed.
  5. 根据权利要求1所述的共享虚拟现实数据的方法,其特征在于,在所述采集第一终端当前所处环境的环境数据的同时,还包括:The method for sharing virtual reality data according to claim 1, wherein, while collecting the environmental data of the environment in which the first terminal is currently located, the method further includes:
    采集所述第一终端侧的用户对所述环境的观看视角;Collecting a viewing angle of the environment on the first terminal side of the user;
    在所述将所述虚拟现实数据传输给至少一个第二终端的同时,还包括:And the transmitting the virtual reality data to the at least one second terminal, the method further includes:
    将所述观看视角发送给所述至少一个第二终端,以便所述第二终端按照该观看视角渲染所述虚拟现实数据,并输出以所述观看视角呈现出的虚拟现实数据。And transmitting the viewing angle to the at least one second terminal, so that the second terminal renders the virtual reality data according to the viewing angle, and outputs virtual reality data presented in the viewing angle.
  6. 根据权利要求1、2或5所述的共享虚拟现实数据的方法,其特征在于,在所述将所述虚拟现实数据传输给至少一个第二终端之前,还包括:The method for sharing virtual reality data according to claim 1, 2 or 5, further comprising: before the transmitting the virtual reality data to the at least one second terminal,
    确定所述第一终端的网络状态;Determining a network status of the first terminal;
    基于所述网络状态确定编码模式;Determining an encoding mode based on the network state;
    按照所述编码模式对所述虚拟现实数据进行编码。The virtual reality data is encoded in accordance with the encoding mode.
  7. 根据权利要求1、2或5所述的共享虚拟现实数据的方法,其特征在于,所述获取第一终端当前所处环境的环境数据,包括:The method for sharing virtual reality data according to claim 1, 2 or 5, wherein the obtaining the environmental data of the environment in which the first terminal is currently located includes:
    获取当前采集到的第一终端当前所处环境的环境图像;Obtaining an environment image of the current environment where the first terminal is currently collected;
    所述在所述环境图像为二维图像的情况下,将所述环境数据中的所述环境图像由二维图像转换为三维图像,得到由所述环境数据转换出的虚拟现实数据,包括:In the case that the environment image is a two-dimensional image, the environment image in the environment data is converted from a two-dimensional image into a three-dimensional image, and the virtual reality data converted by the environment data is obtained, including:
    在获取到当前采集到的所述环境图像,且确定所述环境图像为二维图像时,将所述环境数据中的所述环境图像由二维图像转换为三维图像,得到由所述环境数据转换出的虚拟现实数据;When the currently acquired environment image is obtained and the environment image is determined to be a two-dimensional image, converting the environment image in the environment data from a two-dimensional image into a three-dimensional image, to obtain the virtual reality data converted from the environment data;
    所述将所述虚拟现实数据传输给至少一个第二终端,包括:Transmitting the virtual reality data to the at least one second terminal includes:
    将当前时刻转换出的所述虚拟现实数据传输给至少一个第二终端。Transmitting the virtual reality data converted at the current time to the at least one second terminal.
  8. 一种共享虚拟现实数据的方法,其特征在于,包括:A method for sharing virtual reality data, comprising:
    获取第一终端待分享的目标数据,所述目标数据包括至少一帧图像;Obtaining target data to be shared by the first terminal, where the target data includes at least one frame image;
    在所述目标数据中的图像为二维图像的情况下,将所述目标数据中的图像由二维图像转换为三维图像,得到由所述目标数据转换出的虚拟现实数据,所述虚拟现实数据用于反映依据所述目标数据所构建出的三维场景;In a case where the image in the target data is a two-dimensional image, the image in the target data is converted from a two-dimensional image into a three-dimensional image, and virtual reality data converted from the target data is obtained, the virtual reality The data is used to reflect a three-dimensional scene constructed according to the target data;
    将所述虚拟现实数据传输给至少一个第二终端。Transmitting the virtual reality data to at least one second terminal.
  9. 根据权利要求8所述的共享虚拟现实数据的方法,其特征在于,所述获取第一终端侧待分享的目标数据,包括:The method for sharing the virtual reality data according to claim 8, wherein the acquiring the target data to be shared by the first terminal side comprises:
    获取第一终端当前所处环境的环境数据,所述环境数据包括:第一终端所处环境的环境图像。Obtaining environment data of an environment in which the first terminal is currently located, where the environment data includes: an environment image of an environment in which the first terminal is located.
  10. 根据权利要求8所述的共享虚拟现实数据的方法,其特征在于,所述获取第一终端待分享的目标数据,包括:The method for sharing the virtual reality data according to claim 8, wherein the acquiring the target data to be shared by the first terminal comprises:
    获取所述第一终端当前播放的视频数据,所述视频数据包括至少一帧视频图像。Obtaining video data currently played by the first terminal, where the video data includes at least one frame of video image.
  11. 根据权利要求10所述的共享虚拟现实数据的方法,其特征在于,所述获取所述第一终端当前播放的视频数据,包括:The method for sharing virtual reality data according to claim 10, wherein the acquiring the video data currently played by the first terminal comprises:
    获取所述第一终端中展现出的目标播放窗口中当前播放的视频数据,其中,所述目标播放窗口为所述第一终端中指定应用的图像输出窗口。Obtaining video data currently playing in the target play window displayed in the first terminal, where the target play window is an image output window of a specified application in the first terminal.
  12. 一种终端,其特征在于,包括:A terminal, comprising:
    图像采集装置,用于采集所述终端当前所处环境的环境图像;An image capturing device, configured to collect an environment image of an environment in which the terminal is currently located;
    数据接口,用于获取所述终端当前所处环境的环境数据,所述环境数据至少包括:所述图像采集装置采集到的所述环境图像;a data interface, configured to acquire environment data of an environment in which the terminal is currently located, where the environment data includes at least: the environment image collected by the image collection device;
    处理器,用于在所述环境图像为二维图像的情况下,将所述环境数据中的所述环境图像由二维图像转换为三维图像,得到由所述环境数据转换出的虚拟现实数据,所述虚拟现实数据用于反映所述终端所处环境的三维场景;a processor, configured to, in a case where the environment image is a two-dimensional image, convert the environment image in the environment data from a two-dimensional image into a three-dimensional image, to obtain virtual reality data converted from the environment data, the virtual reality data being used to reflect the three-dimensional scene of the environment in which the terminal is located;
    通信模块,用于将所述虚拟现实数据传输给至少一个接收终端。And a communication module, configured to transmit the virtual reality data to the at least one receiving terminal.
  13. 根据权利要求12所述的终端,其特征在于,所述处理器,还用于在所述环境图像为三维图像的情况下,将所述环境数据确定为所述用于反映所述终端所处环境的三维场景的虚拟现实数据。The terminal according to claim 12, wherein the processor is further configured to, in a case where the environment image is a three-dimensional image, determine the environment data as the virtual reality data for reflecting the three-dimensional scene of the environment in which the terminal is located.
  14. 根据权利要求12或13所述的终端,其特征在于,所述处理器还用于,在得到虚拟现实数据之后,对所述虚拟现实数据中三维的环境图像进行虚拟现实场景渲染,所述虚拟现实场景渲染包括:反向畸变、反色散和瞳距调节中的一种或几种。The terminal according to claim 12 or 13, wherein the processor is further configured to, after obtaining the virtual reality data, perform virtual reality scene rendering on the three-dimensional environment image in the virtual reality data, the virtual reality scene rendering including one or more of inverse distortion, inverse dispersion, and interpupillary distance adjustment.
  15. 根据权利要求12至14任一项所述的终端,其特征在于,所述处理器在将所述环境数据中的所述环境图像由二维图像转换为三维图像时,具体用于,为所述环境数据中的所述环境图像创建深度信息;利用所述深度信息和所述环境图像,构建所述环境图像对应的三维图像。The terminal according to any one of claims 12 to 14, wherein, when converting the environment image in the environment data from a two-dimensional image into a three-dimensional image, the processor is specifically configured to: create depth information for the environment image in the environment data; and construct, by using the depth information and the environment image, the three-dimensional image corresponding to the environment image.
  16. 根据权利要求15所述的终端,其特征在于,所述终端还包括:The terminal according to claim 15, wherein the terminal further comprises:
    传感器,用于感应所述终端侧的用户对所述环境的观看视角;a sensor, configured to sense a viewing angle of the environment of the user on the terminal side;
    所述数据接口,还用于获取所述传感器采集到的所述终端侧的用户对所述环境的观看视角;the data interface being further configured to acquire the viewing angle, sensed by the sensor, at which the user on the terminal side views the environment;
    所述通信模块,还用于在所述将所述虚拟现实数据传输给至少一个第二终端的同时,将所述观看视角发送给所述至少一个第二终端,以便所述第二终端按照该观看视角渲染所述虚拟现实数据,并输出以所述观看视角呈现出的虚拟现实数据。The communication module is further configured to, while transmitting the virtual reality data to the at least one second terminal, send the viewing angle to the at least one second terminal, so that the second terminal renders the virtual reality data according to the viewing angle and outputs the virtual reality data presented at the viewing angle.
  17. 根据权利要求12、13或16所述的终端,其特征在于,所述处理器还用于,在所述通信模块将所述虚拟现实数据传输给至少一个接收终端之前,确定所述终端的网络状态;基于所述网络状态确定编码模式;按照所述编码模式对所述虚拟现实数据进行编码。The terminal according to claim 12, 13 or 16, wherein the processor is further configured to, before the communication module transmits the virtual reality data to the at least one receiving terminal, determine the network state of the terminal; determine an encoding mode based on the network state; and encode the virtual reality data according to the encoding mode.
  18. 一种终端,其特征在于,包括:A terminal, comprising:
    数据接口,用于获取第一终端待分享的目标数据,所述目标数据包括至少一帧图像;a data interface, configured to acquire target data to be shared by the first terminal, where the target data includes at least one frame image;
    处理器,用于在所述目标数据中的图像为二维图像的情况下,将所述目标数据中的图像由二维图像转换为三维图像,得到由所述目标数据转换出的虚拟现实数据,所述虚拟现实数据用于反映依据所述目标数据所构建出的三维场景;a processor, configured to, in a case where the image in the target data is a two-dimensional image, convert the image in the target data from a two-dimensional image into a three-dimensional image, to obtain virtual reality data converted from the target data, the virtual reality data being used to reflect the three-dimensional scene constructed according to the target data;
    通信模块,用于将所述虚拟现实数据传输给至少一个第二终端。And a communication module, configured to transmit the virtual reality data to the at least one second terminal.
  19. 根据权利要求18所述的终端,其特征在于,所述数据接口在获取第一终端侧待分享的目标数据时,具体用于,获取第一终端当前所处环境的环境数据,所述环境数据包括:第一终端所处环境的环境图像。The terminal according to claim 18, wherein, when acquiring the target data to be shared by the first terminal side, the data interface is specifically configured to acquire environment data of the environment in which the first terminal is currently located, the environment data including an environment image of the environment in which the first terminal is located.
  20. 根据权利要求18所述的终端,其特征在于,所述数据接口在获取第一终端待分享的目标数据时,具体用于获取所述第一终端当前播放的视频数据,所述视频数据包括至少一帧视频图像。The terminal according to claim 18, wherein, when acquiring the target data to be shared by the first terminal, the data interface is specifically configured to acquire the video data currently played by the first terminal, the video data including at least one frame of video image.
  21. 根据权利要求20所述的终端,其特征在于,所述数据接口在获取所述第一终端当前播放的视频数据时,具体用于,获取所述第一终端中展现出的目标播放窗口中当前播放的视频数据,其中,所述目标播放窗口为所述第一终端中指定应用的图像输出窗口。The terminal according to claim 20, wherein, when acquiring the video data currently played by the first terminal, the data interface is specifically configured to acquire the video data currently played in the target play window displayed in the first terminal, where the target play window is an image output window of a specified application in the first terminal.
  22. 一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,其特征在于,当所述指令在终端上运行时,使得所述终端执行如权利要求1-7中任一项所述的共享虚拟现实数据的方法。A computer readable storage medium storing instructions, wherein, when the instructions are run on a terminal, the terminal is caused to perform the method of sharing virtual reality data according to any one of claims 1-7.
  23. 一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,其特征在于,当所述指令在终端上运行时,使得所述终端执行如权利要求8-11中任一项所述的共享虚拟现实数据的方法。A computer readable storage medium storing instructions, wherein, when the instructions are run on a terminal, the terminal is caused to perform the method of sharing virtual reality data according to any one of claims 8-11.
  24. 一种包含指令的计算机程序产品,其特征在于,当所述计算机程序产品在终端上运行时,使得所述终端执行如权利要求1-7中任一项所述的共享虚拟现实数据的方法。A computer program product comprising instructions, wherein when the computer program product is run on a terminal, the terminal is caused to perform the method of sharing virtual reality data according to any one of claims 1-7.
  25. 一种包含指令的计算机程序产品,其特征在于,当所述计算机程序产品在终端上运行时,使得所述终端执行如权利要求8-11中任一项所述的共享虚拟现实数据的方法。 A computer program product comprising instructions, wherein the computer program product, when run on a terminal, causes the terminal to perform the method of sharing virtual reality data according to any one of claims 8-11.
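The apparatus recited in claims 18-21 (a data interface that acquires the target data to be shared, and a communication module that transmits the resulting virtual reality data to at least one second terminal) can be sketched in a few lines. The following Python sketch is purely illustrative: `DataInterface`, `CommunicationModule`, `frame_vr_payload`, and the `"video_app"` window identifier are invented names, and the patent does not define any concrete API.

```python
import struct

# Hypothetical sketch of the claimed first terminal. All class and
# function names below are illustrative, not defined by the patent.

class DataInterface:
    """Acquires the target data to be shared on the first terminal."""

    def acquire_environment_image(self):
        # Claim 19: the environment data includes an image of the
        # environment the first terminal is currently in (e.g. a camera
        # capture); stubbed with placeholder bytes here.
        return b"ENV_IMAGE"

    def acquire_window_video_frame(self, window_id):
        # Claims 20-21: the video data currently played in the target
        # play window, i.e. the image output window of a specified app.
        return b"FRAME:" + window_id.encode()

def frame_vr_payload(kind, data):
    """Tag and length-prefix the data so a second terminal can parse it."""
    return struct.pack(">BI", kind, len(data)) + data

class CommunicationModule:
    """Transmits the VR data to at least one second terminal (claim 18)."""

    def __init__(self, transport):
        # transport is an injected callable taking (peer, payload), so
        # the sketch stays runnable without a real network stack.
        self.transport = transport

    def send(self, payload, peers):
        for peer in peers:
            self.transport(peer, payload)

# Usage: share the current frame of a hypothetical "video_app" window
# with two second terminals.
sent = []
iface = DataInterface()
comm = CommunicationModule(lambda peer, p: sent.append((peer, p)))
payload = frame_vr_payload(1, iface.acquire_window_video_frame("video_app"))
comm.send(payload, ["terminal-2", "terminal-3"])
```

Injecting the transport callable keeps the sketch self-contained; an actual terminal would replace it with a network channel to each second terminal.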
PCT/CN2017/087725 2016-12-27 2017-06-09 Method and device for sharing virtual reality data WO2018120657A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201780005621.0A CN108431872A (en) 2016-12-27 2017-06-09 A kind of method and apparatus of shared virtual reality data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611224693 2016-12-27
CN201611224693.8 2016-12-27

Publications (1)

Publication Number Publication Date
WO2018120657A1 true WO2018120657A1 (en) 2018-07-05

Family

ID=62706946

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/087725 WO2018120657A1 (en) 2016-12-27 2017-06-09 Method and device for sharing virtual reality data

Country Status (2)

Country Link
CN (1) CN108431872A (en)
WO (1) WO2018120657A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111552376A (en) * 2020-03-19 2020-08-18 恒润博雅应急科技有限公司 VR natural disaster scene interaction device and method thereof
CN111612919A (en) * 2020-06-19 2020-09-01 中国人民解放军国防科技大学 Multidisciplinary split-screen synchronous display method and system of digital twin aircraft
CN112492231B (en) * 2020-11-02 2023-03-21 重庆创通联智物联网有限公司 Remote interaction method, device, electronic equipment and computer readable storage medium
CN117915117B (en) * 2024-01-25 2024-08-06 太一云境技术有限公司 Accompanying communication care method, system, computer equipment and storage medium thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103269423A (en) * 2013-05-13 2013-08-28 浙江大学 Expandable three-dimensional display remote video communication method
CN104866261A (en) * 2014-02-24 2015-08-26 联想(北京)有限公司 Information processing method and device
CN105913715A (en) * 2016-06-23 2016-08-31 同济大学 VR sharable experimental system and method applicable to building environmental engineering study
CN106060528A (en) * 2016-08-05 2016-10-26 福建天泉教育科技有限公司 Method and system for enhancing reality based on mobile phone side and electronic whiteboard
WO2016191051A1 (en) * 2015-05-28 2016-12-01 Microsoft Technology Licensing, Llc Shared tactile interaction and user safety in shared space multi-person immersive virtual reality

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101754036A (en) * 2008-12-19 2010-06-23 聚晶光电股份有限公司 Two-dimensional/three-dimensional image imaging device, control method and three-dimensional image displaying method
KR20120019728A (en) * 2010-08-26 2012-03-07 엘지전자 주식회사 Apparatus for displaying image and method for operating the same
CN102164265B (en) * 2011-05-23 2013-03-13 宇龙计算机通信科技(深圳)有限公司 Method and system of three-dimensional video call
CN103135754B (en) * 2011-12-02 2016-05-11 深圳泰山体育科技股份有限公司 Adopt interactive device to realize mutual method
CN103634563A (en) * 2012-08-24 2014-03-12 中兴通讯股份有限公司 Video conference display method and device
CN103489214A (en) * 2013-09-10 2014-01-01 北京邮电大学 Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system
JP2015192436A (en) * 2014-03-28 2015-11-02 キヤノン株式会社 Transmission terminal, reception terminal, transmission/reception system and program therefor
JP5997824B1 (en) * 2015-11-10 2016-09-28 株式会社オプティム Remote terminal, remote instruction method, and program for remote terminal

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109992902A (en) * 2019-04-08 2019-07-09 中船第九设计研究院工程有限公司 A kind of ship's space experiencing system construction method based on virtual reality
CN111459267A (en) * 2020-03-02 2020-07-28 杭州嘉澜创新科技有限公司 Data processing method, first server, second server and storage medium
CN113452896A (en) * 2020-03-26 2021-09-28 华为技术有限公司 Image display method and electronic equipment
CN113452896B (en) * 2020-03-26 2022-07-22 华为技术有限公司 Image display method and electronic equipment
US12056412B2 (en) 2020-03-26 2024-08-06 Huawei Technologies Co., Ltd. Image display method and electronic device
CN111931830A (en) * 2020-07-27 2020-11-13 泰瑞数创科技(北京)有限公司 Video fusion processing method and device, electronic equipment and storage medium
CN111931830B (en) * 2020-07-27 2023-12-29 泰瑞数创科技(北京)股份有限公司 Video fusion processing method and device, electronic equipment and storage medium
CN114554276A (en) * 2020-11-26 2022-05-27 中移物联网有限公司 Method, device and system for sharing content between devices
CN114554276B (en) * 2020-11-26 2023-12-12 中移物联网有限公司 Method, device and system for sharing content between devices
CN113873313A (en) * 2021-09-22 2021-12-31 乐相科技有限公司 Virtual reality picture sharing method and device
CN113873313B (en) * 2021-09-22 2024-03-29 乐相科技有限公司 Virtual reality picture sharing method and device
CN115396474A (en) * 2022-08-30 2022-11-25 重庆长安汽车股份有限公司 Vehicle-end virtual reality data transmission system and method

Also Published As

Publication number Publication date
CN108431872A (en) 2018-08-21

Similar Documents

Publication Publication Date Title
WO2018120657A1 (en) Method and device for sharing virtual reality data
US11803055B2 (en) Sedentary virtual reality method and systems
WO2020192458A1 (en) Image processing method and head-mounted display device
KR20170029002A (en) Invisible optical label for transmitting information between computing devices
CN106576158A (en) Immersive video
CN110537208B (en) Head-mounted display and method
US10572764B1 (en) Adaptive stereo rendering to reduce motion sickness
WO2018205878A1 (en) Method for transmitting video information, terminal, server and storage medium
CN113014960B (en) Method, device and storage medium for online video production
WO2022241981A1 (en) Head-mounted display device and head-mounted display system
CN103797805A (en) Media encoding using changed regions
CN111989914A (en) Remote presentation device operating method
US20150244984A1 (en) Information processing method and device
CN111510757A (en) Method, device and system for sharing media data stream
JP2020115299A (en) Virtual space information processing device, method and program
US20220172440A1 (en) Extended field of view generation for split-rendering for virtual reality streaming
CN116027895A (en) Virtual content interaction method, device, equipment and storage medium
KR101784095B1 (en) Head-mounted display apparatus using a plurality of data and system for transmitting and receiving the plurality of data
KR102261739B1 (en) System and method for adaptive streaming of augmented reality media content
CN115623156B (en) Audio processing method and related device
CN114416237B (en) Display state switching method, device and system, electronic equipment and storage medium
WO2021249562A1 (en) Information transmission method, related device, and system
Tychkov et al. Virtual reality in information transfer
CN115049543A (en) Ultra-clear facial image reconstruction method and device and mobile terminal
EP4428805A1 (en) Dynamic range mapping method and apparatus for panoramic video

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17888633

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17888633

Country of ref document: EP

Kind code of ref document: A1