
WO2020063172A1 - Data processing method, terminal, server, and storage medium - Google Patents

Data processing method, terminal, server, and storage medium

Info

Publication number
WO2020063172A1
Authority
WO
WIPO (PCT)
Prior art keywords
depth
depth images
image
terminal
module
Prior art date
Application number
PCT/CN2019/100650
Other languages
English (en)
French (fr)
Inventor
夏炀
李虎
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2020063172A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 - Processing image signals
    • H04N 13/128 - Adjusting depth or disparity
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 - Processing image signals
    • H04N 13/161 - Encoding, multiplexing or demultiplexing different image signal components
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/194 - Transmission of image signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 - Image signal generators
    • H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps

Definitions

  • The embodiment of the present application is based on a Chinese patent application with application number 201811162631.8 and a filing date of September 30, 2018, and claims the priority of that Chinese patent application.
  • The entire content of the Chinese patent application is incorporated herein by reference.
  • the present application relates to the field of image processing technology, and relates to, but is not limited to, a data processing method, a terminal, a server, and a storage medium.
  • the three-dimensional video data includes two-dimensional image data (for example, Red Green Blue (RGB) data) and depth data (Depth data), and the three-dimensional video data is transmitted by transmitting two-dimensional video data and depth data, respectively.
  • The relationship among the speckle image signal-to-noise ratio, depth accuracy, and speckle density is used to design and encode a multi-code structured light pattern.
  • The projected structured light pattern and the modulated image are used to obtain corresponding matching points.
  • The offset between the points before and after projection is used to calculate the depth value of each matching point, so as to acquire depth data of the object under measurement. However, the accuracy of the depth information obtained in this way is insufficient.
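The offset-to-depth step described above can be sketched with the standard structured-light triangulation relation. The focal length, baseline, and pixel offsets below are invented for illustration and are not values from this application:

```python
def depth_from_offset(offset_px, focal_px, baseline_mm):
    """Triangulate the depth of a matched point from the offset between
    its projected position and its observed (modulated) position."""
    if offset_px <= 0:
        raise ValueError("offset must be positive")
    # Standard structured-light / stereo relation: Z = f * b / d
    return focal_px * baseline_mm / offset_px

# A larger offset corresponds to a closer surface point.
near = depth_from_offset(offset_px=40.0, focal_px=800.0, baseline_mm=60.0)  # 1200.0 mm
far = depth_from_offset(offset_px=10.0, focal_px=800.0, baseline_mm=60.0)   # 4800.0 mm
```

The insufficient accuracy noted above stems partly from each depth value resting on a single such measurement per point.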
  • the embodiments of the present application provide a data processing method, a terminal, a server, and a storage medium.
  • An embodiment of the present application provides a data processing method.
  • the method includes:
  • transmitting N different coded lights, where N is a positive integer greater than 1;
  • obtaining N depth images of one frame of three-dimensional video data based on the N different coded lights;
  • sending the N depth images to a Mobile Edge Computing (MEC) server, where the N depth images are used by the MEC server to determine, from the N depth images, a depth image that matches a two-dimensional (2D) image in the one frame of three-dimensional video data.
  • the method further includes:
  • the transmitting N different coded lights includes:
  • N coded lights of different shapes and/or textures are emitted.
  • the obtaining N coded light sequences includes:
  • N different coded light sequences are generated based on different coding modes.
  • the sending the N depth images to a mobile edge computing MEC server includes: sending the N depth images to the MEC server through time division multiplexing.
  • An embodiment of the present application further provides a data processing method applied to a MEC server, the method including:
  • receiving a frame of three-dimensional video data sent by a terminal, where the frame of three-dimensional video data includes a two-dimensional 2D image and N depth images; N is a positive integer greater than 1;
  • the determining the depth information matching the 2D image according to the N depth images includes at least one of the following:
  • selecting, from the N depth images, one depth image that matches the 2D image; or generating, based on the N depth images, one depth image that matches the 2D image.
  • An embodiment of the present application further provides a terminal, where the terminal includes: a transmitting module, a first acquiring module, and a first sending module;
  • the transmitting module is configured to transmit N different coded lights; wherein N is a positive integer greater than 1;
  • the first obtaining module is configured to obtain N depth images of one frame of three-dimensional video data based on the N different coded lights;
  • the first sending module is configured to send the N depth images to a mobile edge computing MEC server, wherein the N depth images are configured for the MEC server to determine, from the N depth images, a depth image that matches a two-dimensional 2D image in the one frame of three-dimensional video data.
  • the terminal further includes:
  • a second acquisition module configured to obtain N different coded light sequences
  • the transmitting module includes:
  • the first transmitting unit is configured to transmit N coded lights of different shapes and/or textures according to the N different coded light sequences.
  • the second acquisition module includes:
  • the first generating unit is configured to generate N different coded light sequences based on different coding modes.
  • the first sending module includes:
  • the first sending unit is configured to send the N depth images to the MEC server through time division multiplexing.
  • An embodiment of the present application further provides a MEC server, where the server includes: a first receiving module, a first determining module, and a first establishing module;
  • the first receiving module is configured to receive a frame of three-dimensional video data sent by a terminal, where the frame of three-dimensional video data includes a two-dimensional 2D image and N depth images; N is a positive integer greater than 1;
  • the first determining module is configured to determine a depth image matching the 2D image according to the N depth images;
  • the first establishing module is configured to establish a three-dimensional video based on the 2D image and the determined depth image.
  • the first determining module includes at least one of the following:
  • a first selection unit configured to select one of the depth images that matches the 2D image from the N depth images
  • a second generating unit is configured to generate one depth image matching the 2D image based on the N depth images.
  • An embodiment of the present application further provides a computer storage medium that stores computer instructions which, when executed by a processor, implement the steps of the data processing method applied to a terminal according to the embodiments of the present application, or which, when executed by a processor, implement the steps of the data processing method applied to the MEC server according to the embodiments of the present application.
  • An embodiment of the present application further provides a terminal including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the program, the steps of the data processing method applied to the terminal according to the embodiments of the present application are implemented.
  • An embodiment of the present application further provides a MEC server including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the program, the steps of the data processing method applied to the MEC server according to the embodiments of the present application are implemented.
  • The embodiments of the present application provide a data processing method, terminal, server, and storage medium.
  • N different coded lights are transmitted, where N is a positive integer greater than 1; N depth images of one frame of three-dimensional video data are then obtained based on the N different coded lights; finally, the N depth images are sent to a mobile edge computing MEC server, where the N depth images are used by the MEC server to determine, from the N depth images, a depth image that matches the two-dimensional 2D image in the three-dimensional video data.
  • In this way, the server can combine the depth values in multiple depth images, making the obtained depth values more accurate.
  • FIG. 1 is a schematic diagram of a system architecture applied to a data processing method according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of an implementation process of a data processing method according to an embodiment of the present application
  • FIG. 3 is an implementation interaction diagram of a data processing method according to an embodiment of the present application.
  • FIG. 4A is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 4B is another schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 4C is another schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 4D is still another schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 5A is a schematic structural diagram of a server according to an embodiment of the present application.
  • FIG. 5B is another schematic structural diagram of a server according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a hardware composition and structure of a data processing device according to an embodiment of the present application.
  • the data processing method in the embodiment of the present application is applied to a service related to three-dimensional video data.
  • the service is, for example, a service for sharing three-dimensional video data, or a live broadcast service based on three-dimensional video data.
  • The transmitted depth values and two-dimensional video data require higher technical support in the data processing process, so the mobile communication network needs a faster data processing rate and a more stable data processing environment.
  • FIG. 1 is a schematic diagram of a system architecture applied to a data processing method according to an embodiment of the present application.
  • The system may include a terminal, a base station, a MEC server, a service processing server, a core network, and the Internet; high-speed channels are established between the MEC server and the service processing server through the core network to achieve data synchronization.
  • MEC server A is a MEC server deployed near terminal A (the sending end), and core network A is the core network in the area where terminal A is located; MEC server B is a MEC server deployed near terminal B (the receiving end), and core network B is the core network in the area where terminal B is located; MEC server A and MEC server B can establish high-speed channels with the service processing server through core network A and core network B, respectively, to achieve data synchronization.
  • After the three-dimensional video data sent by terminal A reaches MEC server A, MEC server A synchronizes the data to the service processing server through core network A; MEC server B then obtains the three-dimensional video data sent by terminal A from the service processing server and sends it to terminal B for presentation.
  • If terminal B and terminal A use the same MEC server for transmission, terminal B and terminal A directly implement three-dimensional video data transmission through one MEC server without the participation of the service processing server; this mode is called local loopback.
  • Specifically, it is assumed that terminal B and terminal A realize the transmission of three-dimensional video data through MEC server A; after the three-dimensional video data sent by terminal A reaches MEC server A, the three-dimensional video data is sent by MEC server A to terminal B for presentation.
  • The terminal may select an evolved base station (eNB) that accesses a 4G network or a next-generation base station (gNB) that accesses a 5G network based on the network situation, the configuration of the terminal itself, or an algorithm configured by itself.
  • The eNB is connected to the MEC server through a Long Term Evolution (LTE) access network, and the gNB is connected to the MEC server through the Next Generation Radio Access Network (NG-RAN).
  • the MEC server is deployed on the edge of the network near the terminal or the source of the data.
  • Being near the terminal or the source of the data means being close to the terminal or the data source not only in logical location but also geographically.
  • multiple MEC servers can be deployed in one city. For example, in an office building with many users, a MEC server can be deployed near the office building.
  • As an edge computing gateway with core capabilities of converged networks, computing, storage, and applications, the MEC server provides platform support for edge computing covering the device domain, network domain, data domain, and application domain. It connects various types of smart devices and sensors and provides smart connection and data processing services nearby, allowing different types of applications and data to be processed in the MEC server. This realizes key intelligent services such as real-time business processing, business intelligence, data aggregation and interoperation, and security and privacy protection, effectively improving the intelligent decision-making efficiency of the business.
  • FIG. 2 is a schematic flowchart of a data processing method according to an embodiment of the present application. As shown in FIG. 2, the method includes the following steps:
  • In step S201, N different coded lights are emitted.
  • N is a positive integer greater than 1.
  • The step S201 can be understood as transmitting N coded lights of different shapes and/or textures according to N different coded light sequences. It is also possible to generate N different coded light sequences based on different coding modes, for example, M-ary coding, two-gray-level coding, or phase-shifting coding, so as to generate N consecutive sequences of different coded light; M is greater than or equal to 2.
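As a concrete illustration of one coding mode named above, a two-gray-level (Gray-code) stripe sequence can be generated as follows. The pattern width and stripe layout are illustrative assumptions, not details from this application:

```python
def gray_code_patterns(n_bits, width):
    """Generate n_bits binary stripe patterns, one per bit of the Gray
    code of each column index. Projecting these patterns in sequence
    assigns every projector column a unique n_bits-bit code."""
    patterns = []
    for bit in range(n_bits - 1, -1, -1):
        row = []
        for col in range(width):
            gray = col ^ (col >> 1)          # Gray code of the column index
            row.append((gray >> bit) & 1)    # this pattern's stripe value
        patterns.append(row)
    return patterns

pats = gray_code_patterns(n_bits=3, width=8)
# Decoding: stacking the bits observed at one column recovers its code,
# so each column can be matched between projector and camera.
codes = [tuple(p[c] for p in pats) for c in range(8)]
```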
  • The different coded light may be: structured light of different shapes and/or textures projected by the depth camera toward the collection object, and the structured light may be non-visible light, so as not to interfere with the collection of two-dimensional images based on visible light imaging.
  • the different coded lights may be coded lights of different wavelengths. At this time, at least two depth images corresponding to the two-dimensional image may be acquired at the same time or may not be acquired at the same time.
  • In step S202, N depth images of one frame of three-dimensional video data are obtained based on the N different coded lights.
  • The terminal projects the N different coded lights, that is, structured light of different shapes and/or textures, onto the collection object. Because the surface of the collection object is uneven and at varying distances from the depth camera, the image of the structured light collected by the depth camera differs from the shape and/or texture of the projected structured light; by comparing the shape and/or texture presented in the collected image with that of the projected light, the depth value of each structured-light projection point can be determined, and a depth image can be constructed from these depth values. Since at least two depth images are collected using different structured light, the terminal can obtain at least two depth values at the same position on the collection target by combining at least two depth images, thereby improving the accuracy of the depth value.
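The accuracy gain from combining several depth measurements of the same point can be sketched numerically. The depth samples below are invented for illustration, and `None` marks a point one image failed to capture:

```python
def combine_depth_samples(samples):
    """Average the depth values measured for the same surface point
    across the N depth images, rejecting missing readings."""
    valid = [s for s in samples if s is not None]
    if not valid:
        return None  # no image captured this point
    return sum(valid) / len(valid)

# Three depth images measured the same point with sensor noise; one
# image missed the point entirely (e.g. its pattern was occluded).
fused = combine_depth_samples([1010.0, 990.0, None, 1000.0])
```

Averaging N independent noisy readings reduces the random error roughly by a factor of the square root of N, which is why multiple depth images per frame improve accuracy.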
  • Step S203 Send the N depth images to a MEC server.
  • The N depth images are used for the MEC server to determine, from the N depth images, a depth image that matches a two-dimensional 2D image in the one frame of three-dimensional video data.
  • In this way, the MEC server can obtain at least two depth images corresponding to the two-dimensional image from the terminal, and can combine the depth values in the two depth images to obtain a more accurate depth value, or reduce the phenomenon in which some objects in the two-dimensional image of one frame of three-dimensional video data lack a depth value.
  • N depth images of one frame of three-dimensional video data are collected, so that one frame to be collected corresponds to multiple depth images; in this way, the depth value of the frame of three-dimensional video data can be determined more accurately according to the multiple depth images.
  • The obtaining N depth images of one frame of three-dimensional video data includes: the terminal obtains the N depth images from an acquisition component capable of acquiring at least depth data, and the acquisition component can establish a communication link with at least one terminal so that the corresponding terminal obtains the three-dimensional video data.
  • Since an acquisition component capable of acquiring depth data is relatively expensive, the terminal may not itself have the function of acquiring three-dimensional video data; instead, the three-dimensional video data is collected through an acquisition component independent of the terminal, and a communication link is established between the acquisition component and the terminal so that the terminal obtains the three-dimensional video data collected by the acquisition component.
  • the acquisition component may be implemented by at least one of the following: a depth camera, a binocular camera, a 3D structured light camera module, and a time of flight (TOF) camera module.
  • The acquisition component can establish a communication link with at least one terminal to provide the acquired three-dimensional video data to the at least one terminal, so that the corresponding terminal obtains the three-dimensional video data; in this way, the three-dimensional video data collected by one acquisition component can be shared with at least one terminal, realizing sharing of the acquisition component.
  • the terminal itself has a function of acquiring three-dimensional video data.
  • The terminal is provided with an acquisition component capable of acquiring at least depth data, for example, at least one of the following components: a depth camera, a binocular camera, a 3D structured light camera module, and a TOF camera module, to collect three-dimensional video data.
  • the obtained three-dimensional video data includes two-dimensional video data and depth data.
  • the two-dimensional video data is used to represent a planar image, and may be RGB data, for example.
  • The depth data represents the distance between the acquisition component and the surface of the acquisition object targeted by the acquisition component.
  • FIG. 3 is an implementation interaction diagram of the data processing method of the embodiment of the present application. As shown in FIG. 3, the method includes the following steps:
  • Step S301: The terminal transmits N coded lights of different shapes and/or textures according to N different coded light sequences.
  • the N different coded light sequences may include: N different coded light sequences generated based on a same coding mode.
  • the step S301 may be that the terminal generates N different coded light sequences based on different coding modes.
  • Step S302 The terminal obtains N depth images of one frame of three-dimensional video data based on the N different coded lights.
  • The step S302 can also be understood as: projecting the N consecutive sequences of different coded light onto the same acquisition target in the three-dimensional video data to be acquired, to obtain N consecutive depth images describing the same acquisition target; that is, the same acquisition target corresponds to one two-dimensional image and N depth images.
  • the terminal can obtain at least two depth values at the same position of the collection target according to combining at least two depth images, thereby improving the accuracy of the depth value.
  • the depth information contained in the N depth images is different, so the terminal can obtain more accurate depth information according to the N depth images.
  • Step S303 The terminal sends the N depth images to the MEC server through time division multiplexing.
  • step S303 can be understood as sending the N depth images to the MEC server at different time points to avoid situations such as network congestion.
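A minimal sketch of such time-division sending, with an invented slot duration; a real implementation would hand each depth image to the network stack at its assigned time:

```python
def schedule_depth_images(n_images, slot_ms, start_ms=0):
    """Assign each of the N depth images of one frame its own send
    time, so they leave the terminal at different moments instead of
    bursting at once (which could congest the uplink)."""
    return [(i, start_ms + i * slot_ms) for i in range(n_images)]

# Four depth images of one frame, spaced 5 ms apart.
plan = schedule_depth_images(n_images=4, slot_ms=5)
```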
  • Step S304 The server receives one frame of three-dimensional video data sent by the terminal.
  • The one frame of three-dimensional video data includes a two-dimensional image and N depth images; N is a positive integer greater than 1.
  • Step S305 The server determines a depth image that matches the 2D image according to the N depth images.
  • Step S305 may be performed in one of the following two ways. Way one:
  • the server selects one of the depth images that matches the 2D image from the N depth images. That is, the server selects one or more depth images containing the most comprehensive or accurate depth information from the N depth images as one of the depth images matching the 2D image. In this embodiment, the server selects a depth image containing the most comprehensive or accurate depth information from the N depth images, which can be implemented by the following two methods:
  • The first method is: the server judges one by one whether the depth values corresponding to the N depth images are missing, or whether each depth image contains enough feature points for matching with the 2D image. If there is a depth image whose depth values are complete, that depth image can be used as the depth image matching the 2D image. For example, suppose there are 10 depth images in total; if the depth values contained in the fourth of the 10 depth images are not missing, the fourth depth image is used as the depth image matching the 2D image.
  • The second method is: the server judges one by one whether the depth values corresponding to the N depth images contain abnormal values, such as values that are too large, too small, or far from a preset depth value; depth images with abnormal values are then excluded so that only depth images without outliers are retained, and from these a depth image containing enough feature points for matching the 2D image is selected as the depth image matching the 2D image.
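Both methods of way one (keep only depth images without abnormal values, then prefer the most complete one) can be sketched together. The tiny depth maps, the valid range, and `None` marking a missing value are illustrative assumptions:

```python
def select_best_depth(depth_images, lo=100.0, hi=5000.0):
    """Drop depth images containing out-of-range (abnormal) values,
    then pick the one with the fewest missing values."""
    def has_outlier(img):
        return any(v is not None and not (lo <= v <= hi)
                   for row in img for v in row)

    def missing(img):
        return sum(v is None for row in img for v in row)

    candidates = [img for img in depth_images if not has_outlier(img)]
    return min(candidates, key=missing) if candidates else None

imgs = [
    [[1000.0, None], [None, 1200.0]],       # two missing values
    [[1000.0, 1100.0], [None, 1200.0]],     # one missing value -> best
    [[1000.0, 1100.0], [99999.0, 1200.0]],  # contains an outlier -> dropped
]
best = select_best_depth(imgs)
```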
  • Way two: one depth image matching the 2D image is generated according to the N depth images. That is, the server combines the N depth images to obtain a depth image containing the depth values corresponding to the N depth images, as the depth image matching the 2D image. For example, the above 10 depth images are combined into one depth image; the depth information obtained by this combination is very rich, so the accuracy of the depth information is also improved.
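Way two can be sketched per pixel; averaging the valid values at each pixel is one possible combination rule, chosen here only for illustration:

```python
def merge_depth_images(depth_images):
    """Combine N aligned depth images into one: at each pixel, average
    the values that are present, so a gap in one image is filled by
    the others and noise is averaged down."""
    rows, cols = len(depth_images[0]), len(depth_images[0][0])
    merged = []
    for r in range(rows):
        out_row = []
        for c in range(cols):
            vals = [img[r][c] for img in depth_images
                    if img[r][c] is not None]
            out_row.append(sum(vals) / len(vals) if vals else None)
        merged.append(out_row)
    return merged

# Two 1x2 depth images: the second fills the gap in the first.
a = [[1000.0, None]]
b = [[1020.0, 800.0]]
merged = merge_depth_images([a, b])
```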
  • Step S306 The server establishes a three-dimensional video based on the 2D image and the determined depth image.
  • In this way, the MEC server may obtain at least two depth images corresponding to the two-dimensional image from the terminal, and may combine the depth values in the two depth images to obtain a more accurate depth value, or reduce the phenomenon in which some objects in the two-dimensional image of one frame of three-dimensional video data lack depth values.
  • the terminal sends multiple depth images of one frame of three-dimensional video data to the MEC server, so that the MEC server can obtain more accurate depth values by combining the depth values in the multiple depth images.
  • An embodiment of the present application provides a data processing method. N consecutive sequences of different coded light are projected onto the acquisition target of the three-dimensional video data to be acquired, to obtain N consecutive depth images corresponding to the three-dimensional video data. The receiving end then identifies each coded point based on the N consecutive depth images received, and the depth values corresponding to the N consecutive depth images are determined according to these coded points.
  • The different coded lights may be coded lights of the same wavelength; in this case, at least two depth images corresponding to the two-dimensional image may be collected at different times, through time division multiplexing, during the collection of the same frame of three-dimensional video data, thereby realizing time division multiplexing of depth images.
  • Different coded lights are emitted in different ways.
  • The coded light is reflected after encountering the collection target, forming reflected light.
  • The depth value can be calculated based on the emission time of the emitted light, the time at which the reflected light is received, and the propagation speed of light.
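The emission-time/reception-time relation above is the time-of-flight principle; a minimal sketch with invented timestamps:

```python
# Speed of light in mm per nanosecond (~299.79 mm/ns).
C_MM_PER_NS = 299_792_458e3 / 1e9

def tof_depth_mm(emit_ns, receive_ns):
    """Depth from time of flight: the light travels to the target and
    back, so the one-way distance is half the round trip."""
    round_trip_ns = receive_ns - emit_ns
    return C_MM_PER_NS * round_trip_ns / 2.0

# A pulse that returns 10 ns after emission corresponds to ~1.5 m.
d = tof_depth_mm(emit_ns=0.0, receive_ns=10.0)
```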
  • Coded lights of different shapes and/or textures may travel along different transmission paths, so that when one coded light is blocked, the other coded light can still form reflected light after being emitted, allowing depth values to be collected; alternatively, if both coded lights are reflected and the reflected light reaches the receiver of the depth camera, two depth images can be generated separately.
  • With coded light of different wavelengths, two depth images can be collected simultaneously; if coded light of the same wavelength is used, time division multiplexing can be used for processing.
  • In this way, the depth information obtained under different coded lights is transmitted to the MEC server in a time-division manner through time division multiplexing, so that the MEC server can obtain accurate depth information.
  • the embodiment of the present application further provides a terminal.
  • FIG. 4A is a schematic structural diagram of a terminal according to an embodiment of the present application; as shown in FIG. 4A, the terminal 40 includes: a transmitting module 41, a first obtaining module 42, and a first sending module 43;
  • the transmitting module 41 is configured to transmit N different coded lights; wherein N is a positive integer greater than 1;
  • the first obtaining module 42 is configured to obtain N depth images of one frame of three-dimensional video data based on the N different coded lights;
  • the first sending module 43 is configured to send the N depth images to a mobile edge computing MEC server, wherein the N depth images are configured for the MEC server to determine, from the N depth images, a depth image matching a two-dimensional 2D image in the one frame of three-dimensional video data.
  • the terminal 40 further includes:
  • a second obtaining module 44 configured to obtain N different coded light sequences
  • the transmitting module 41 includes:
  • the first transmitting unit 401 is configured to transmit N coded lights of different shapes and/or textures according to the N different coded light sequences.
  • the second obtaining module 44 includes:
  • the first generating unit 402 is configured to generate N different coded light sequences based on different coding modes.
  • the first sending module 43 includes:
  • the first sending unit 403 is configured to send N depth images to the MEC server through time division multiplexing.
  • In practical applications, the first sending module 43 in the terminal may be implemented by a processor in the terminal, such as a central processing unit (CPU), a digital signal processor (DSP), a microcontroller unit (MCU), or a field-programmable gate array (FPGA);
  • the first sending module 43 in the terminal may also be implemented through a communication module (including a basic communication suite, an operating system, a communication module, a standardized interface and protocol, etc.) and a transceiver antenna;
  • the transmitting module 41 in the terminal may, in practical applications, be implemented by a stereo camera, a binocular camera, or a structured light camera, or through a communication module (including a basic communication suite, an operating system, a communication module, a standardized interface and protocol, etc.) and a transceiver antenna;
  • the first obtaining module 42 in the terminal may be implemented by a processor such as a CPU, DSP, MCU, or FPGA combined with a communication module.
  • The terminal provided in the foregoing embodiments is illustrated only by the division of the above program modules when performing data processing; in practical applications, the above processing can be allocated to different program modules as needed, that is, the internal structure of the terminal can be divided into different program modules to complete all or part of the processing described above.
  • the terminal and the data processing method embodiments provided in the foregoing embodiments belong to the same concept. For specific implementation processes, refer to the method embodiments, and details are not described herein again.
  • FIG. 5A is a schematic structural diagram of a server according to an embodiment of the present application; as shown in FIG. 5A, the server 50 includes a first receiving module 51, a first determining module 52, and a first establishing module 53;
  • the first receiving module 51 is configured to receive a frame of three-dimensional video data sent by a terminal, where the frame of three-dimensional video data includes a two-dimensional 2D image and N depth images; N is a positive integer greater than 1;
  • the first determining module 52 is configured to determine a depth image matching the 2D image according to the N depth images;
  • the first establishing module 53 is configured to establish a three-dimensional video based on the 2D image and the determined depth image.
  • the first determining module 52 includes at least one of the following:
  • a first selection unit 501 configured to select one of the depth images matching the 2D image from the N depth images
  • the second generating unit 502 is configured to generate one depth image matching the 2D image according to the N depth images.
  • in practical applications, the second data processing unit 52 in the server may be implemented by a processor in the server, such as a CPU, a DSP, an MCU, or an FPGA; the second communication unit 51 in the server
  • may be implemented by a communication module (including a basic communication suite, an operating system, a communication module, a standardized interface and protocol, etc.) and a transmitting/receiving antenna.
  • it should be noted that when the server provided by the foregoing embodiment performs data processing, the division into the foregoing program modules is merely illustrative.
  • in practical applications, the above processing can be allocated to different program modules as needed; that is, the internal structure of the server is divided into different program modules to complete all or part of the processing described above.
  • in addition, the server provided by the foregoing embodiment and the data processing method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
  • FIG. 6 is a schematic diagram of the hardware composition structure of the data processing device according to an embodiment of the present application.
  • as shown in FIG. 6, the data processing device 60 includes a memory 61, a processor 62, and a computer program stored in the memory and executable on the processor.
  • when the data processing device is a terminal, the processor located at the terminal, when executing the program, implements: emitting N different coded lights, where N is a positive integer greater than 1; obtaining, based on the N different coded lights, N depth images of one frame of three-dimensional video data; and sending the N depth images to a mobile edge computing MEC server, where the N depth images are used by the MEC server to determine,
  • from the N depth images, a depth image matching a two-dimensional 2D image in the frame of three-dimensional video data.
  • in an embodiment, the processor at the terminal, when executing the program, implements: obtaining N different coded light sequences; and emitting N coded lights of different shapes and/or textures according to the N different coded light sequences.
  • in an embodiment, the N different coded light sequences are generated based on different coding modes.
  • in an embodiment, the processor located at the terminal, when executing the program, implements: sending the N depth images to the MEC server through time-division multiplexing.
  • when the data processing device is a server, the processor located on the server, when executing the program, implements: receiving one frame of three-dimensional video data sent by the terminal, where the frame of three-dimensional video data includes a two-dimensional 2D image and N depth images, N being a positive integer greater than 1; determining, according to the N depth images, a depth image matching the 2D image; and establishing a three-dimensional video based on the 2D image and the determined depth image.
  • in an embodiment, the processor located on the server, when executing the program, implements: selecting, from the N depth images, one depth image matching the 2D image; or generating, according to the N depth images, one depth image matching the 2D image.
  • the data processing device further includes a communication interface 63; the components in the data processing device (terminal or server) are coupled together through a bus system. Understandably, the bus system is configured to enable connection and communication between these components. In addition to a data bus, the bus system also includes a power bus, a control bus, and a status signal bus.
  • An embodiment of the present application further provides a computer storage medium that stores computer instructions; when the instructions are executed by a processor, the steps of the data processing method applied to a terminal according to the embodiments of the present application are implemented; or, when the instructions are executed by a processor, the steps of the data processing method applied to an MEC server according to the embodiments of the present application are implemented.
  • An embodiment of the present application further provides a terminal including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the program, the steps of the data processing method applied to a terminal described in the embodiments of the present application are implemented.
  • An embodiment of the present application further provides an MEC server including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the program, the steps of the data processing method applied to an MEC server described in the embodiments of the present application are implemented.
  • in the several embodiments provided in the present application, it should be understood that the disclosed method and smart device may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • for example, the division of the units is only a logical function division;
  • in actual implementation, there may be other division manners: multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • in addition, the coupling, direct coupling, or communication connection between the displayed or discussed components may be through some interfaces;
  • the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units; some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • in addition, the functional units in the embodiments of the present application may all be integrated into one second processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit;
  • the above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • those of ordinary skill in the art can understand that all or part of the steps of the foregoing method embodiments may be completed by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium.
  • when the program is executed,
  • the steps of the foregoing method embodiments are performed.
  • the foregoing storage medium includes various media that can store program code, such as a mobile storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
  • alternatively, if the above-mentioned integrated unit of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • based on such understanding, the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a mobile phone) to execute all or part of the methods described in the embodiments of the present application.
  • the foregoing storage medium includes various media that can store program code, such as a mobile storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
  • In the embodiments of the present application, N different coded lights are emitted, where N is a positive integer greater than 1; then, based on the N different coded lights, N depth images of one frame of three-dimensional video data are obtained; finally, the N depth images are sent to a mobile edge computing MEC server, where the N depth images are configured for the MEC server to determine, from the N depth images, a depth image matching a two-dimensional 2D image in the three-dimensional video data. In this way, when depth images are collected through multiple different coded lights to determine the depth values of this frame of three-dimensional video, the server combines the depth values in the multiple depth images, making the obtained depth values more accurate.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

Embodiments of the present application disclose a data processing method, the method including: emitting N different coded lights, where N is a positive integer greater than 1; obtaining, based on the N different coded lights, N depth images of one frame of three-dimensional video data; and sending the N depth images to a mobile edge computing MEC server, where the N depth images are used by the MEC server to determine, from the N depth images, a depth image matching a two-dimensional 2D image in the frame of three-dimensional video data. Embodiments of the present application also disclose a server and a storage medium.

Description

Data processing method, terminal, server and storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
The embodiments of the present application are based on the Chinese patent application with application number 201811162631.8, filed on September 30, 2018, and claim priority to that Chinese patent application, the entire contents of which are hereby incorporated into the embodiments of the present application by reference.
TECHNICAL FIELD
The present application relates to the field of image processing technologies, and relates to, but is not limited to, a data processing method, a terminal, a server, and a storage medium.
BACKGROUND
With the continuous development of mobile communication networks, their transmission rates have increased rapidly, providing strong technical support for the emergence and development of three-dimensional video services. Three-dimensional video data includes two-dimensional image data (for example, red-green-blue (RGB) data) and depth data, and the transmission of three-dimensional video data transmits the two-dimensional video data and the depth data separately. At present, multiple coded structured-light patterns are designed by using the relationship between the speckle-image signal-to-noise ratio and depth accuracy on the one hand and the speckle density on the other; corresponding matching points are then obtained from the projected structured-light pattern and the captured modulated image, and the depth value of each matching point is calculated according to the offset of the matching point before and after projection, so as to acquire the depth data of the object under test. As a result, the accuracy of the depth information obtained in this way is insufficient.
SUMMARY
The embodiments of the present application provide a data processing method, a terminal, a server, and a storage medium.
The technical solutions of the embodiments of the present application are implemented as follows:
An embodiment of the present application provides a data processing method, the method including:
emitting N different coded lights, where N is a positive integer greater than 1;
obtaining, based on the N different coded lights, N depth images of one frame of three-dimensional video data; and
sending the N depth images to a mobile edge computing (MEC) server, where the N depth images are used by the MEC server to determine, from the N depth images, a depth image matching a two-dimensional (2D) image in the three-dimensional video data.
In the above solution, the method further includes:
obtaining N different coded light sequences;
correspondingly, the emitting N different coded lights includes:
emitting N coded lights of different shapes and/or textures according to the N different coded light sequences.
In the above solution, the obtaining N coded light sequences includes:
generating the N different coded light sequences based on different coding modes.
In the above solution, the sending the N depth images to a mobile edge computing MEC server includes:
sending the N depth images to the MEC server through time-division multiplexing.
An embodiment of the present application further provides a data processing method, applied to an MEC server, the method including:
receiving one frame of three-dimensional video data sent by a terminal, where the frame of three-dimensional video data contains a two-dimensional 2D image and N depth images, N being a positive integer greater than 1;
determining, according to the N depth images, a depth image matching the 2D image; and
establishing a three-dimensional video based on the 2D image and the determined depth image.
In the above solution, the determining, according to the N depth images, depth information matching the 2D image includes at least one of the following:
selecting, from the N depth images, one depth image matching the 2D image;
generating, according to the N depth images, one depth image matching the 2D image.
An embodiment of the present application further provides a terminal, the terminal including: a transmitting module, a first acquisition module, and a first sending module, where
the transmitting module is configured to emit N different coded lights, where N is a positive integer greater than 1;
the first acquisition module is configured to obtain, based on the N different coded lights, N depth images of one frame of three-dimensional video data; and
the first sending module is configured to send the N depth images to a mobile edge computing MEC server, where the N depth images are configured for the MEC server to determine, from the N depth images, a depth image matching a two-dimensional 2D image in the frame of three-dimensional video data.
In the above solution, the terminal further includes:
a second acquisition module, configured to obtain N different coded light sequences;
the transmitting module includes:
a first transmitting unit, configured to emit N coded lights of different shapes and/or textures according to the N different coded light sequences.
In the above solution, the second acquisition module includes:
a first generating unit, configured to generate the N different coded light sequences based on different coding modes.
In the above solution, the first sending module includes:
a first sending unit, configured to send the N depth images to the MEC server through time-division multiplexing.
An embodiment of the present application further provides an MEC server, the server including: a first receiving module, a first determining module, and a first establishing module, where
the first receiving module is configured to receive one frame of three-dimensional video data sent by a terminal, where the frame of three-dimensional video data contains a two-dimensional 2D image and N depth images, N being a positive integer greater than 1;
the first determining module is configured to determine, according to the N depth images, a depth image matching the 2D image;
the first establishing module is configured to establish a three-dimensional video based on the 2D image and the determined depth image.
In the above solution, the first determining module includes at least one of the following:
a first selection unit, configured to select, from the N depth images, one depth image matching the 2D image;
a second generating unit, configured to generate, according to the N depth images, one depth image matching the 2D image.
An embodiment of the present application further provides a computer storage medium on which computer instructions are stored; when executed by a processor, the instructions implement the steps of the data processing method applied to a terminal described in the embodiments of the present application; or, when executed by a processor, the instructions implement the steps of the data processing method applied to an MEC server described in the embodiments of the present application.
An embodiment of the present application further provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the steps of the data processing method applied to a terminal described in the embodiments of the present application are implemented.
An embodiment of the present application further provides an MEC server, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the steps of the data processing method applied to an MEC server described in the embodiments of the present application are implemented.
The embodiments of the present application provide a data processing method, a terminal, a server, and a storage medium. First, N different coded lights are emitted, where N is a positive integer greater than 1; then, based on the N different coded lights, N depth images of one frame of three-dimensional video data are obtained; finally, the N depth images are sent to a mobile edge computing MEC server, where the N depth images are used by the MEC server to determine, from the N depth images, a depth image matching a two-dimensional 2D image in the three-dimensional video data. In this way, when depth images are collected through multiple different coded lights to determine the depth values of this frame of three-dimensional video, the server combines the depth values in the multiple depth images, making the obtained depth values more accurate.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of the system architecture to which the data processing method of an embodiment of the present application is applied;
FIG. 2 is a schematic flowchart of the implementation of the data processing method of an embodiment of the present application;
FIG. 3 is an interaction diagram of the implementation of the data processing method of an embodiment of the present application;
FIG. 4A is a schematic diagram of one composition structure of the terminal of an embodiment of the present application;
FIG. 4B is a schematic diagram of another composition structure of the terminal of an embodiment of the present application;
FIG. 4C is a schematic diagram of yet another composition structure of the terminal of an embodiment of the present application;
FIG. 4D is a schematic diagram of still another composition structure of the terminal of an embodiment of the present application;
FIG. 5A is a schematic diagram of the composition structure of the server of an embodiment of the present application;
FIG. 5B is a schematic diagram of another composition structure of the server of an embodiment of the present application;
FIG. 6 is a schematic diagram of the hardware composition structure of the data processing device of an embodiment of the present application.
DETAILED DESCRIPTION
Before describing the technical solutions of the embodiments of the present application in detail, the system architecture to which the data processing method of the embodiments of the present application is applied is briefly described. The data processing method of the embodiments of the present application is applied to services related to three-dimensional video data, for example, a three-dimensional video data sharing service, or a live broadcast service based on three-dimensional video data. In this case, because the data volume of three-dimensional video data is large, the separately transmitted depth values and two-dimensional video data require strong technical support during data processing; therefore, the mobile communication network needs a relatively fast data processing rate and a relatively stable data processing environment.
FIG. 1 is a schematic diagram of the system architecture to which the data processing method of an embodiment of the present application is applied. As shown in FIG. 1, the system may include a terminal, a base station, an MEC server, a service processing server, a core network, the Internet, and the like; a high-speed channel is established between the MEC server and the service processing server through the core network to achieve data synchronization.
Taking the application scenario of two terminals interacting shown in FIG. 1 as an example, MEC server A is an MEC server deployed close to terminal A (the sending end), and core network A is the core network of the region where terminal A is located; correspondingly, MEC server B is an MEC server deployed close to terminal B (the receiving end), and core network B is the core network of the region where terminal B is located; MEC server A and MEC server B can establish high-speed channels with the service processing server through core network A and core network B, respectively, to achieve data synchronization.
After the three-dimensional video data sent by terminal A is processed by MEC server A, MEC server A synchronizes the data to the service processing server through core network A; MEC server B then obtains the three-dimensional video data sent by terminal A from the service processing server and sends it to terminal B for presentation.
Here, if terminal B and terminal A implement transmission through the same MEC server, terminal B and terminal A directly implement the transmission of three-dimensional video data through one MEC server without the participation of the service processing server; this manner is called local backhaul. Specifically, assuming that terminal B and terminal A implement the transmission of three-dimensional video data through MEC server A, after the three-dimensional video data sent by terminal A is processed by MEC server A, MEC server A sends the three-dimensional video data to terminal B for presentation.
Here, the terminal can select, based on the network situation, the configuration of the terminal itself, or its own configured algorithm, to access an evolved base station (eNB) of a 4G network or a next-generation evolved base station (gNB) of a 5G network, so that the eNB is connected to the MEC server through a Long Term Evolution (LTE) access network, and the gNB is connected to the MEC server through a next-generation access network (NG-RAN).
Here, the MEC server is deployed at the network edge close to the terminal or the data source; being close to the terminal or the data source means being close not only in logical position but also geographically. Unlike existing mobile communication networks, in which the main service processing servers are deployed in a few large cities, multiple MEC servers can be deployed in one city. For example, for an office building with many users, an MEC server can be deployed near that building.
The MEC server, as an edge computing gateway with the core capabilities of converged networking, computing, storage, and applications, provides platform support covering the device domain, network domain, data domain, and application domain for edge computing. It connects various types of smart devices and sensors, provides smart connection and data processing services nearby, and allows different types of applications and data to be processed in the MEC server, realizing key intelligent services such as real-time services, service intelligence, data aggregation and interoperability, and security and privacy protection, thereby effectively improving the efficiency of intelligent service decisions.
The present application is further described in detail below with reference to the accompanying drawings and specific embodiments.
An embodiment of the present application provides a data processing method, applied to a terminal; the terminal may be a mobile terminal such as a mobile phone or a tablet computer, or a terminal such as a computer. FIG. 2 is a schematic flowchart of the implementation of the data processing method of an embodiment of the present application; as shown in FIG. 2, the method includes the following steps:
Step S201: emitting N different coded lights.
Here, N is a positive integer greater than 1. Step S201 can be understood as emitting N coded lights of different shapes and/or textures according to N different coded light sequences. Alternatively, N different coded light sequences may be generated based on different coding modes, for example M-ary coding, two-gray-level coding, or phase-shift coding, to generate N consecutive sequences of different coded lights, where M is greater than or equal to 2. The different coded lights can be structured lights of different shapes and/or textures projected by a depth camera onto the acquisition object; the structured light can be invisible light, so as not to interfere with the acquisition of two-dimensional images based on visible-light imaging. In this embodiment, the different coded lights can be coded lights of different wavelengths; in this case, the at least two depth images corresponding to the two-dimensional image can be acquired either simultaneously or at different times.
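As an illustrative, non-binding sketch of one of the coding modes named above (binary/Gray-code stripe patterns; the function names and the 8-column resolution are assumptions for illustration, not values taken from this application), N different coded light sequences could be generated as follows:

```python
def gray_code(i: int) -> int:
    """Convert an index to its Gray code, so adjacent stripes differ in one bit."""
    return i ^ (i >> 1)

def coded_light_sequences(n: int, width: int):
    """Generate n binary stripe patterns (one row each) for structured light.

    Pattern k sets a pixel to 1 where bit k of the Gray code of its column
    index is set, yielding n mutually different stripe layouts that can be
    projected one after another as n different coded lights.
    """
    patterns = []
    for k in range(n):
        row = [(gray_code(x) >> k) & 1 for x in range(width)]
        patterns.append(row)
    return patterns

# Three different coded light sequences over an 8-column projector.
patterns = coded_light_sequences(n=3, width=8)
```

Decoding the stack of captured patterns per pixel recovers the projector column index, which is what makes the projected and captured patterns comparable point by point.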
Step S202: obtaining, based on the N different coded lights, N depth images of one frame of three-dimensional video data.
Here, the terminal uses N different coded lights, that is, structured lights of different shapes and/or textures. After they are projected onto the acquisition object, because of the unevenness of the object's surface and its varying distance from the depth camera, the depth images acquired by the depth camera based on the structured light differ. By comparing the shape and/or texture of the projected structured light with the shape and/or texture presented in the captured structured-light image, the depth value of each structured-light projection point can be determined, and a depth image is constructed based on these depth values. Since at least two depth images are acquired using different structured lights, the terminal can obtain at least two depth values for the same position of the acquisition target by combining the at least two depth images, thereby improving the accuracy of the depth values.
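For a calibrated projector-camera pair, the comparison of projected and captured patterns described above reduces to triangulation over the observed offset of each projection point. A minimal sketch follows; the focal length, baseline, and offsets are made-up example values, not parameters of this application:

```python
def depth_from_offset(focal_px: float, baseline_m: float, offset_px: float) -> float:
    """Triangulate the depth of one structured-light projection point.

    offset_px is the shift of the point between the projected pattern and
    the captured image; a larger offset means a closer surface.
    """
    if offset_px <= 0:
        raise ValueError("point not matched; no depth value recoverable")
    return focal_px * baseline_m / offset_px

# Two different coded lights may yield two offsets for the same surface
# point; combining the resulting depth values reduces per-pattern noise.
d1 = depth_from_offset(500.0, 0.05, 10.0)
d2 = depth_from_offset(500.0, 0.05, 12.5)
fused = (d1 + d2) / 2
```

This is one way to see why acquiring at least two depth values for the same position improves accuracy: independent measurement noise partially cancels when the values are combined.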
Step S203: sending the N depth images to the MEC server.
Here, the N depth images are used by the MEC server to determine, from the N depth images, a depth image matching the two-dimensional 2D image in the frame of three-dimensional video data. After the terminal sends the depth images to the MEC server, the MEC server can obtain at least two depth images corresponding to the two-dimensional image from the terminal, and can combine the depth values in these depth images to obtain more accurate depth values, or reduce the phenomenon that the depth values of some objects in the two-dimensional image are missing in the frame of three-dimensional video data.
In this embodiment, by using N different coded lights, N depth images of one frame of three-dimensional video data are acquired, so that one frame of the picture to be acquired corresponds to multiple depth images; in this way, the depth values of the frame of three-dimensional video data can be determined more accurately according to the multiple depth images.
In this embodiment, as one implementation, the obtaining N depth images of one frame of three-dimensional video data includes: the terminal obtains the N depth images from an acquisition component capable of at least acquiring depth data, and the acquisition component can establish a communication link with at least one terminal so that the corresponding terminal obtains the three-dimensional video data.
In this implementation, because an acquisition component capable of acquiring depth data is relatively expensive, the terminal does not itself have the acquisition function for three-dimensional video data; instead, the three-dimensional video data is acquired by an acquisition component independent of the terminal, and a communication link is established between the acquisition component and a communication component in the terminal, so that the terminal obtains the three-dimensional video data acquired by the acquisition component. The acquisition component may specifically be implemented by at least one of: a depth camera, a binocular camera, a 3D structured-light camera module, or a time-of-flight (TOF) camera module.
Here, the acquisition component can establish a communication link with at least one terminal to deliver the acquired three-dimensional video data to the at least one terminal, so that the corresponding terminal obtains the three-dimensional video data. In this way, the three-dimensional video data acquired by one acquisition component can be shared with at least one terminal, thereby realizing the sharing of the acquisition component.
As another implementation, the terminal itself has the acquisition function for three-dimensional video data. It can be understood that the terminal is provided with an acquisition component capable of at least acquiring depth data, for example at least one of the following components: a depth camera, a binocular camera, a 3D structured-light camera module, or a TOF camera module, to acquire three-dimensional video data.
The obtained three-dimensional video data includes two-dimensional video data and depth data; the two-dimensional video data is used to represent a planar image, for example RGB data; the depth data represents the distance between the surface of the acquisition object targeted by the acquisition component and the acquisition component.
An embodiment of the present application further provides a data processing method. FIG. 3 is an interaction diagram of the implementation of the data processing method of an embodiment of the present application; as shown in FIG. 3, the method includes the following steps:
Step S301: the terminal emits N coded lights of different shapes and/or textures according to N different coded light sequences.
In some embodiments, the N different coded light sequences may include N different coded light sequences generated based on the same coding mode.
In other embodiments, step S301 may also be that the terminal generates N different coded light sequences based on different coding modes.
Step S302: the terminal obtains, based on the N different coded lights, N depth images of one frame of three-dimensional video data.
Here, step S302 can also be understood as projecting the N consecutive sequences of different coded lights onto the same acquisition target in the three-dimensional video data to be acquired, to obtain N consecutive depth images describing the same acquisition target. That is, the same acquisition target corresponds to one two-dimensional image and N depth images. In this way, since at least two depth images are acquired using different structured lights, the terminal can obtain at least two depth values for the same position of the acquisition target by combining the at least two depth images, thereby improving the accuracy of the depth values. The depth information contained in the N depth images differs, so the terminal can obtain more accurate depth information from the N depth images.
Step S303: the terminal sends the N depth images to the MEC server through time-division multiplexing.
Here, step S303 can be understood as sending the N depth images to the MEC server at different points in time, so as to avoid situations such as network congestion.
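Sending the N depth images at different points in time can be sketched as a simple time-division schedule: each depth image is assigned its own time slot within one frame interval. The slot derivation and the `send` callback below are illustrative assumptions, not part of this application:

```python
def tdm_schedule(num_images: int, frame_interval_s: float):
    """Assign each of the N depth images a start time (a time slot)
    within one frame interval, so they are sent at different moments."""
    slot = frame_interval_s / num_images
    return [i * slot for i in range(num_images)]

def send_depth_images(depth_images, frame_interval_s, send):
    """Send each depth image in its own slot; `send` is a hypothetical
    transport callback (e.g. a socket write toward the MEC server)."""
    sent = []
    starts = tdm_schedule(len(depth_images), frame_interval_s)
    for start, img in zip(starts, depth_images):
        send(start, img)  # in a real system: wait until `start`, then transmit
        sent.append((start, img))
    return sent

log = []
send_depth_images(["d0", "d1", "d2"], 0.033, lambda t, img: log.append((round(t, 4), img)))
```

Spreading the transmissions across the frame interval in this way is what avoids the burst that would occur if all N depth images were pushed onto the link at once.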
Step S304: the server receives one frame of three-dimensional video data sent by the terminal.
Here, the frame of three-dimensional video data contains a two-dimensional image and N depth images; N is a positive integer greater than 1.
Step S305: the server determines, according to the N depth images, a depth image matching the 2D image.
Here, step S305 can be realized in the following two ways. Way one:
the server selects, from the N depth images, one depth image matching the 2D image. That is, the server selects, from the N depth images, one or more depth images whose contained depth information is the most complete or the most accurate as the depth image matching the 2D image. In this embodiment, the server can select, from the N depth images, the depth image whose contained depth information is the most complete or accurate by the following two methods:
Method one: the server judges, one by one, whether the depth values corresponding to each of the N depth images are missing, or in other words whether the depth values contain enough feature points for matching the 2D image; if one depth image contains complete depth values, that depth image can be taken as the depth image matching the 2D image. For example, suppose there are 10 depth images in total; if the depth values contained in the 4th of these 10 depth images have no missing values, the 4th depth image is taken as the depth image matching the 2D image.
Method two: the server judges, one by one, whether the depth values corresponding to each of the N depth images contain abnormal values, for example values that are too large, too small, or far from a preset depth value; the depth images containing abnormal values are excluded, and only the depth images without abnormal values are kept; from these depth images, one whose depth values contain enough feature points for matching the 2D image is selected as the depth image matching the 2D image.
Way two: generating, according to the N depth images, one depth image matching the 2D image. That is, the server combines the N depth images to obtain one depth image containing the depth values corresponding to the N depth images, as the depth image matching the 2D image. For example, the above 10 depth images are combined into one depth image; the depth information contained in the combined depth image is then very rich, which improves the accuracy of the depth information.
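The two server-side ways above — picking the most complete depth image, and merging all N into one — can be sketched over depth maps stored as flat lists, with 0 marking a missing depth value. This encoding and the averaging rule are assumptions for illustration; real depth maps and the matching criterion are richer:

```python
def pick_most_complete(depth_maps):
    """Way one: select the depth map with the fewest missing (zero) values."""
    return max(depth_maps, key=lambda m: sum(1 for v in m if v > 0))

def merge_depth_maps(depth_maps):
    """Way two: merge N depth maps into one by averaging the valid values
    at each pixel, filling gaps that any single map may have."""
    merged = []
    for values in zip(*depth_maps):
        valid = [v for v in values if v > 0]
        merged.append(sum(valid) / len(valid) if valid else 0)
    return merged

# Two depth maps of the same 3-pixel scene; the first has one gap.
maps = [[1.0, 0, 2.0], [1.2, 1.9, 2.1]]
best = pick_most_complete(maps)
merged = merge_depth_maps(maps)
```

The merged map illustrates the point made above: combining the depth values of multiple depth images both fills gaps and smooths per-image noise.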
Step S306: the server establishes a three-dimensional video based on the 2D image and the determined depth image.
Here, the MEC server can obtain at least two depth images corresponding to the two-dimensional image from the terminal, and can combine the depth values in these depth images to obtain more accurate depth values, or reduce the phenomenon that the depth values of some objects in the two-dimensional image are missing in the frame of three-dimensional video data.
In this embodiment, the terminal sends multiple depth images of one frame of three-dimensional video data to the MEC server, so that the MEC server can combine the depth values in the multiple depth images to obtain more accurate depth values.
An embodiment of the present application provides a data processing method in which N consecutive sequences of different coded lights are projected onto the three-dimensional video data to be acquired, to obtain N consecutive depth images corresponding to the three-dimensional video data. The receiving end then identifies each coding point according to the received N consecutive depth images, and determines the depth values corresponding to the N consecutive depth images according to these coding points. The coding mode of the projected N consecutive sequences of different coded lights can be binary code (the most common), N-ary code (N = 5, 8), two-gray-level coding, phase-shift coding, or other schemes.
In still other embodiments, the different coded lights can be coded lights of the same wavelength; in this case, the at least two depth images corresponding to the two-dimensional image can be acquired, through time-division multiplexing, at different moments within the acquisition process of the same frame of three-dimensional video data, thereby realizing time-division multiplexing of the depth images.
In other embodiments, different coded lights are emitted; a coded light is reflected after encountering the acquisition target, and the depth value can be solved based on the emission time of the emitted light, the reflection time of the reflected light, and the speed of light propagation. In this case, coded lights of different shapes and/or textures cause the light transmission paths to differ; thus, when one coded light is blocked, another coded light may still form reflected light that returns after being emitted, so that a depth value is acquired; or, both coded lights are reflected to form reflected light reaching the receiver of the depth camera, and two depth images can likewise be generated separately.
If coded lights of different wavelengths are used, two depth images can be acquired synchronously; if coded lights of the same wavelength are used, time-division multiplexing can be adopted for processing.
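The time-of-flight relationship mentioned above — solving the depth value from the emission time, the reflection time, and the speed of light propagation — can be written out as a short sketch; the timestamps used in the example are invented for illustration:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_depth(emit_time_s: float, return_time_s: float) -> float:
    """Depth from time of flight: the light travels to the target and back,
    so the one-way distance is half the round trip."""
    round_trip = return_time_s - emit_time_s
    if round_trip < 0:
        raise ValueError("reflection recorded before emission")
    return SPEED_OF_LIGHT * round_trip / 2

# A pulse that returns 20 ns after emission corresponds to roughly 3 m.
depth_m = tof_depth(0.0, 20e-9)
```

With coded lights of different wavelengths, two such measurements can be taken synchronously on separate receiver channels; with the same wavelength, the pulses are interleaved in time as described above.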
In this embodiment, the depth information obtained under the different coded lights is transmitted to the MEC in a time-divided manner through time-division multiplexing, so that the MEC obtains accurate depth information.
To implement the terminal-side method of the embodiments of the present application, an embodiment of the present application further provides a terminal. FIG. 4A is a schematic diagram of one composition structure of the terminal of an embodiment of the present application; as shown in FIG. 4A, the terminal 40 includes a transmitting module 41, a first acquisition module 42, and a first sending module 43, where
the transmitting module 41 is configured to emit N different coded lights, where N is a positive integer greater than 1;
the first acquisition module 42 is configured to obtain, based on the N different coded lights, N depth images of one frame of three-dimensional video data; and
the first sending module 43 is configured to send the N depth images to a mobile edge computing MEC server, where the N depth images are configured for the MEC server to determine, from the N depth images, a depth image matching a two-dimensional 2D image in the frame of three-dimensional video data.
In the above solution, as shown in FIG. 4B, the terminal 40 further includes:
a second acquisition module 44, configured to obtain N different coded light sequences;
the transmitting module 41 includes:
a first transmitting unit 401, configured to emit N coded lights of different shapes and/or textures according to the N different coded light sequences.
In the above solution, as shown in FIG. 4C, the second acquisition module 44 includes:
a first generating unit 402, configured to generate the N different coded light sequences based on different coding modes.
In the above solution, as shown in FIG. 4D, the first sending module 43 includes:
a first sending unit 403, configured to send the N depth images to the MEC server through time-division multiplexing.
In the embodiments of the present application, the first sending module 43 in the terminal may, in practical applications, be implemented by a processor in the terminal, such as a central processing unit (CPU), a digital signal processor (DSP), a microcontroller unit (MCU), or a field-programmable gate array (FPGA); the first sending module 43 in the terminal may, in practical applications, be implemented through a communication module (including a basic communication suite, an operating system, a communication module, a standardized interface and protocol, etc.) and a transmitting/receiving antenna; the transmitting module 41 in the terminal may, in practical applications, be implemented by a stereo camera, a binocular camera, or a structured-light camera, or through a communication module (including a basic communication suite, an operating system, a communication module, a standardized interface and protocol, etc.) and a transmitting/receiving antenna; the first acquisition module 42 in the terminal may, in practical applications, be implemented by a processor such as a CPU, DSP, MCU, or FPGA in combination with a communication module.
It should be noted that when the terminal provided by the foregoing embodiment performs data processing, the division into the foregoing program modules is merely illustrative; in practical applications, the above processing can be allocated to different program modules as needed, that is, the internal structure of the terminal is divided into different program modules to complete all or part of the processing described above. In addition, the terminal provided by the foregoing embodiment and the data processing method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
Correspondingly, to implement the server-side method of the embodiments of the present application, an embodiment of the present application further provides a server, specifically an MEC server. FIG. 5A is a schematic diagram of the composition structure of the server of an embodiment of the present application; as shown in FIG. 5A, the server 50 includes a first receiving module 51, a first determining module 52, and a first establishing module 53, where
the first receiving module 51 is configured to receive one frame of three-dimensional video data sent by a terminal, where the frame of three-dimensional video data contains a two-dimensional 2D image and N depth images, N being a positive integer greater than 1;
the first determining module 52 is configured to determine, according to the N depth images, a depth image matching the 2D image;
the first establishing module 53 is configured to establish a three-dimensional video based on the 2D image and the determined depth image.
In the above solution, as shown in FIG. 5B, the first determining module 52 includes at least one of the following:
a first selection unit 501, configured to select, from the N depth images, one depth image matching the 2D image;
a second generating unit 502, configured to generate, according to the N depth images, one depth image matching the 2D image.
In the embodiments of the present application, the second data processing unit 52 in the server may, in practical applications, be implemented by a processor in the server, such as a CPU, DSP, MCU, or FPGA; the second communication unit 51 in the server may, in practical applications, be implemented through a communication module (including a basic communication suite, an operating system, a communication module, a standardized interface and protocol, etc.) and a transmitting/receiving antenna.
It should be noted that when the server provided by the foregoing embodiment performs data processing, the division into the foregoing program modules is merely illustrative; in practical applications, the above processing can be allocated to different program modules as needed, that is, the internal structure of the server is divided into different program modules to complete all or part of the processing described above. In addition, the server provided by the foregoing embodiment and the data processing method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
Based on the hardware implementation of the foregoing devices, an embodiment of the present application further provides a data processing device. FIG. 6 is a schematic diagram of the hardware composition structure of the data processing device of an embodiment of the present application. As shown in FIG. 6, the data processing device 60 includes a memory 61, a processor 62, and a computer program stored in the memory and executable on the processor. As a first implementation, when the data processing device is a terminal, the processor located at the terminal, when executing the program, implements: emitting N different coded lights, where N is a positive integer greater than 1; obtaining, based on the N different coded lights, N depth images of one frame of three-dimensional video data; and sending the N depth images to a mobile edge computing MEC server, where the N depth images are used by the MEC server to determine, from the N depth images, a depth image matching a two-dimensional 2D image in the frame of three-dimensional video data.
In an embodiment, the processor located at the terminal, when executing the program, implements: obtaining N different coded light sequences; and emitting N coded lights of different shapes and/or textures according to the N different coded light sequences.
In an embodiment, the processor located at the terminal, when executing the program, implements: generating the N different coded light sequences based on different coding modes.
In an embodiment, the processor located at the terminal, when executing the program, implements: sending the N depth images to the MEC server through time-division multiplexing.
As a second implementation, when the data processing device is a server, the processor located on the server, when executing the program, implements: receiving one frame of three-dimensional video data sent by the terminal, where the frame of three-dimensional video data contains a two-dimensional 2D image and N depth images, N being a positive integer greater than 1; determining, according to the N depth images, a depth image matching the 2D image; and establishing a three-dimensional video based on the 2D image and the determined depth image.
In an embodiment, the processor located on the server, when executing the program, implements: selecting, from the N depth images, one depth image matching the 2D image; or generating, according to the N depth images, one depth image matching the 2D image.
It can be understood that the data processing device (terminal or server) further includes a communication interface 63; the components in the data processing device (terminal or server) are coupled together through a bus system. Understandably, the bus system is configured to enable connection and communication between these components. In addition to a data bus, the bus system also includes a power bus, a control bus, and a status signal bus.
An embodiment of the present application further provides a computer storage medium on which computer instructions are stored; when executed by a processor, the instructions implement the steps of the data processing method applied to a terminal described in the embodiments of the present application; or, when executed by a processor, the instructions implement the steps of the data processing method applied to an MEC server described in the embodiments of the present application.
An embodiment of the present application further provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the steps of the data processing method applied to a terminal described in the embodiments of the present application are implemented.
An embodiment of the present application further provides an MEC server, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the steps of the data processing method applied to an MEC server described in the embodiments of the present application are implemented.
In the several embodiments provided in the present application, it should be understood that the disclosed method and smart device may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and in actual implementation there may be other division manners, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the coupling, direct coupling, or communication connection between the displayed or discussed components may be through some interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units; some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may all be integrated into one second processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art can understand that all or part of the steps of the foregoing method embodiments may be completed by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium, and when the program is executed, the steps of the foregoing method embodiments are performed; the foregoing storage medium includes various media that can store program code, such as a mobile storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
Alternatively, if the above integrated unit of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a mobile phone) to execute all or part of the methods described in the embodiments of the present application. The foregoing storage medium includes various media that can store program code, such as a mobile storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
It should be noted that the technical solutions described in the embodiments of the present application can be combined arbitrarily as long as there is no conflict.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto; any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed in the present application, and these shall all fall within the protection scope of the present application.
INDUSTRIAL APPLICABILITY
In the embodiments of the present application, first, N different coded lights are emitted, where N is a positive integer greater than 1; then, based on the N different coded lights, N depth images of one frame of three-dimensional video data are obtained; finally, the N depth images are sent to a mobile edge computing MEC server, where the N depth images are configured for the MEC server to determine, from the N depth images, a depth image matching a two-dimensional 2D image in the three-dimensional video data. In this way, when depth images are collected through multiple different coded lights to determine the depth values of this frame of three-dimensional video, the server combines the depth values in the multiple depth images, making the obtained depth values more accurate.

Claims (15)

  1. A data processing method, applied to a terminal, the method comprising:
    emitting N different coded lights, wherein N is a positive integer greater than 1;
    obtaining, based on the N different coded lights, N depth images of one frame of three-dimensional video data; and
    sending the N depth images to a mobile edge computing MEC server, wherein the N depth images are used by the MEC server to determine, from the N depth images, a depth image matching a two-dimensional 2D image in the frame of three-dimensional video data.
  2. The method according to claim 1, wherein the method further comprises:
    obtaining N different coded light sequences;
    correspondingly, the emitting N different coded lights comprises:
    emitting N coded lights of different shapes and/or textures according to the N different coded light sequences.
  3. The method according to claim 2, wherein the obtaining N coded light sequences comprises:
    generating the N different coded light sequences based on different coding modes.
  4. The method according to claim 1, wherein the sending the N depth images to a mobile edge computing MEC server comprises:
    sending the N depth images to the MEC server through time-division multiplexing.
  5. A data processing method, applied to an MEC server, the method comprising:
    receiving one frame of three-dimensional video data sent by a terminal, wherein the frame of three-dimensional video data contains a two-dimensional 2D image and N depth images, N being a positive integer greater than 1;
    determining, according to the N depth images, a depth image matching the 2D image; and
    establishing a three-dimensional video based on the 2D image and the determined depth image.
  6. The method according to claim 5, wherein the determining, according to the N depth images, depth information matching the 2D image comprises at least one of the following:
    selecting, from the N depth images, one depth image matching the 2D image;
    generating, according to the N depth images, one depth image matching the 2D image.
  7. A terminal, wherein the terminal comprises: a transmitting module, a first acquisition module, and a first sending module, wherein
    the transmitting module is configured to emit N different coded lights, wherein N is a positive integer greater than 1;
    the first acquisition module is configured to obtain, based on the N different coded lights, N depth images of one frame of three-dimensional video data; and
    the first sending module is configured to send the N depth images to a mobile edge computing MEC server, wherein the N depth images are configured for the MEC server to determine, from the N depth images, a depth image matching a two-dimensional 2D image in the frame of three-dimensional video data.
  8. The terminal according to claim 7, wherein the terminal further comprises:
    a second acquisition module, configured to obtain N different coded light sequences;
    the transmitting module comprises:
    a first transmitting unit, configured to emit N coded lights of different shapes and/or textures according to the N different coded light sequences.
  9. The terminal according to claim 8, wherein the second acquisition module comprises:
    a first generating unit, configured to generate the N different coded light sequences based on different coding modes.
  10. The terminal according to claim 7, wherein the first sending module comprises:
    a first sending unit, configured to send the N depth images to the MEC server through time-division multiplexing.
  11. An MEC server, wherein the server comprises: a first receiving module, a first determining module, and a first establishing module, wherein
    the first receiving module is configured to receive one frame of three-dimensional video data sent by a terminal, wherein the frame of three-dimensional video data contains a two-dimensional 2D image and N depth images, N being a positive integer greater than 1;
    the first determining module is configured to determine, according to the N depth images, a depth image matching the 2D image;
    the first establishing module is configured to establish a three-dimensional video based on the 2D image and the determined depth image.
  12. The server according to claim 11, wherein the first determining module comprises at least one of the following:
    a first selection unit, configured to select, from the N depth images, one depth image matching the 2D image;
    a second generating unit, configured to generate, according to the N depth images, one depth image matching the 2D image.
  13. A computer storage medium on which computer instructions are stored, wherein, when executed by a processor, the instructions implement the steps of the data processing method according to any one of claims 1 to 4; or, when executed by a processor, the instructions implement the steps of the data processing method according to claim 5 or 6.
  14. A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein, when the processor executes the program, the steps of the data processing method according to any one of claims 1 to 4 are implemented.
  15. An MEC server, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein, when the processor executes the program, the steps of the data processing method according to claim 5 or 6 are implemented.
PCT/CN2019/100650 2018-09-30 2019-08-14 Data processing method, terminal, server and storage medium WO2020063172A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811162631.8A CN109274953A (zh) 2018-09-30 2018-09-30 Data processing method, terminal, server and storage medium
CN201811162631.8 2018-09-30

Publications (1)

Publication Number Publication Date
WO2020063172A1 true WO2020063172A1 (zh) 2020-04-02

Family

ID=65195043

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/100650 WO2020063172A1 (zh) 2018-09-30 2019-08-14 数据处理方法、终端、服务器和存储介质

Country Status (2)

Country Link
CN (1) CN109274953A (zh)
WO (1) WO2020063172A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140293009A1 (en) * 2013-04-01 2014-10-02 Canon Kabushiki Kaisha Information processing apparatus and information processing method
CN108521436A (zh) * 2018-02-01 2018-09-11 上海交通大学 基于终端计算存储的移动虚拟现实传输方法及系统
CN108564614A (zh) * 2018-04-03 2018-09-21 Oppo广东移动通信有限公司 深度获取方法和装置、计算机可读存储介质和计算机设备
CN108600728A (zh) * 2018-05-10 2018-09-28 Oppo广东移动通信有限公司 一种数据传输方法及终端、计算机存储介质
CN108594453A (zh) * 2018-03-23 2018-09-28 深圳奥比中光科技有限公司 一种结构光投影模组和深度相机

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2670214A1 (en) * 2006-11-21 2008-05-29 Mantisvision Ltd. 3d geometric modeling and 3d video content creation
WO2018033917A1 (en) * 2016-08-18 2018-02-22 Ramot At Tel-Aviv University Ltd. Structured light projector
CN106580506B (zh) * 2016-10-25 2018-11-23 成都频泰医疗设备有限公司 分时三维扫描系统和方法
CN108318706A (zh) * 2017-12-29 2018-07-24 维沃移动通信有限公司 移动物体的测速方法和装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140293009A1 (en) * 2013-04-01 2014-10-02 Canon Kabushiki Kaisha Information processing apparatus and information processing method
CN108521436A (zh) * 2018-02-01 2018-09-11 上海交通大学 基于终端计算存储的移动虚拟现实传输方法及系统
CN108594453A (zh) * 2018-03-23 2018-09-28 深圳奥比中光科技有限公司 一种结构光投影模组和深度相机
CN108564614A (zh) * 2018-04-03 2018-09-21 Oppo广东移动通信有限公司 深度获取方法和装置、计算机可读存储介质和计算机设备
CN108600728A (zh) * 2018-05-10 2018-09-28 Oppo广东移动通信有限公司 一种数据传输方法及终端、计算机存储介质

Also Published As

Publication number Publication date
CN109274953A (zh) 2019-01-25

Similar Documents

Publication Publication Date Title
JP2023145586A (ja) Three-dimensional model transmission method, three-dimensional model reception method, three-dimensional model transmission device, and three-dimensional model reception device
CN112672132B (zh) Data processing method and apparatus, electronic device, and storage medium
CN108780153A (zh) Synchronized active illumination camera
WO2020063179A1 (zh) Data processing method and apparatus, electronic device, and storage medium
US20220191295A1 (en) Synchronizing multiple user devices in an immersive media environment using time-of-flight light patterns
JP2021531688A (ja) Data processing method and apparatus, electronic device, and storage medium
WO2018200337A1 (en) System and method for simulating light transport between virtual and real objects in mixed reality
KR20230085767A (ko) Method and apparatus for providing performance-based split computing
CN109410319B (zh) Data processing method, server, and computer storage medium
CN102270354A (zh) Distributed rendering method based on a peer-to-peer computing cluster and rendering system thereof
WO2020063171A1 (zh) Data transmission method, terminal, server and storage medium
WO2020063172A1 (zh) Data processing method, terminal, server and storage medium
WO2020063170A1 (zh) Data processing method, terminal, server and storage medium
WO2020063168A1 (zh) Data processing method, terminal, server, and computer storage medium
CN109413405B (zh) Data processing method, terminal, server, and computer storage medium
CN109389674B (zh) Data processing method and apparatus, MEC server, and storage medium
Sharma et al. UAV Immersive Video Streaming: A Comprehensive Survey, Benchmarking, and Open Challenges
US12137393B2 (en) Method for the transmission of a frame by an access point of a wireless local area network
CN109147043B (zh) Data processing method, server, and computer storage medium
CN109389675B (zh) Data processing method and apparatus, terminal, and storage medium
WO2020062919A1 (zh) Data processing method, MEC server, and terminal device
CN108737807B (zh) Data processing method, terminal, server, and computer storage medium
CN109299323B (zh) Data processing method, terminal, server, and computer storage medium
US12069138B2 (en) Preserving transmission properties of real-time scenes in an environment when an increasing number of users join a session
CN112152717A (zh) Information pushing method and apparatus, terminal, information pushing device, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19866941

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19866941

Country of ref document: EP

Kind code of ref document: A1