CN114302125B - Image processing method and device and computer readable storage medium - Google Patents
- Publication number: CN114302125B (application CN202111665600.6A)
- Authority: CN (China)
- Legal status: Active (an assumption, not a legal conclusion)
Abstract
An image processing method and apparatus, and a computer-readable storage medium. The image processing method includes: detecting a trigger operation of a user while the i-th video frame is displayed; in response to the trigger operation, when it is detected that the network signal quality between the local device and the cloud-side device meets the video-frame reporting requirement, sending first indication information to the cloud-side device, where the first indication information instructs the cloud-side device to perform image rendering on the (i+1)-th video frame; and receiving and displaying the rendered (i+1)-th video frame sent by the cloud-side device. This scheme reduces the MTP latency of the cloud VR device and reduces the occurrence of display stuttering.
Description
Technical Field
The present invention relates to the field of VR technologies, and in particular, to an image processing method and apparatus, and a computer readable storage medium.
Background
Traditional virtual reality (VR) devices are head-heavy and, owing to their large computational load, consume considerable power and generate heat, which degrades the user experience.
In the prior art, a device-cloud collaboration mode exists for VR scenes: video frames are rendered and streamed by cloud-side equipment, the processing results are sent to the cloud VR device, and the cloud VR device displays them, which reduces the weight and power consumption of the cloud VR device to some extent.

However, because the cloud-side device needs a certain amount of time to process each video frame, and additional delay is introduced by cellular and wireless-LAN transmission and by encoding and decoding, the motion-to-photon (Motion To Photons, MTP) latency of the cloud VR device is high and display stuttering occurs.
Disclosure of Invention
The embodiment of the invention addresses the technical problems that the MTP latency of a cloud VR device is high and display stuttering occurs.
In order to solve the above technical problems, an embodiment of the present invention provides an image processing method, including: detecting a trigger operation of a user while the i-th video frame is displayed; in response to the trigger operation, when it is detected that the network signal quality between the local device and the cloud-side device meets the video-frame reporting requirement, sending first indication information to the cloud-side device, where the first indication information instructs the cloud-side device to perform image rendering on the (i+1)-th video frame; and receiving and displaying the rendered (i+1)-th video frame sent by the cloud-side device.
Optionally, the image processing method further includes: when it is detected that the network signal quality between the local device and the cloud-side device does not meet the video-frame reporting requirement, performing image rendering on the (i+1)-th video frame locally, and displaying the rendered (i+1)-th video frame.
Optionally, before the image rendering of the (i+1)-th video frame, the method further includes: determining that the local load is not greater than a first threshold.
Optionally, the image processing method further includes: if the local load is determined to be greater than the first threshold, displaying the (i+1)-th video frame by time warping.
Optionally, the image processing method further includes: when the network signal quality between the local device and the cloud-side device does not meet the video-frame reporting requirement, performing image rendering on an i-th video frame belonging to a first video frame set; if the (i+1)-th video frame belongs to a second video frame set, sending second indication information to the cloud-side device, where the second indication information instructs the cloud-side device to perform image rendering on the (i+1)-th video frame and the M-1 video frames that follow it; and receiving and displaying the rendered (i+1)-th video frame and the rendered M-1 following video frames sent by the cloud-side device; if the (i+1)-th video frame belongs to the first video frame set, performing image rendering on the (i+1)-th video frame locally and displaying it. The first video frame set contains the video frames rendered locally, and the second video frame set contains the video frames rendered by the cloud-side device.
Optionally, the video frames in the first video frame set and the second video frame set are pre-allocated.
Optionally, the numbers of video frames in the first video frame set and the second video frame set are adjusted according to a preset adjustment condition; the preset adjustment condition includes at least one of: the channel quality between the local device and the cloud-side device, the network quality of service, traffic tariff information, and the local computing power.
The embodiment of the invention also provides another image processing method, including: detecting a trigger operation of a user while the i-th video frame is displayed; in response to the trigger operation, determining, according to traffic tariff information and local computing power, that a first image rendering process is to be performed on the (i+1)-th video frame locally and that a second image rendering process is to be performed on the (i+1)-th video frame by the cloud-side device, and sending to the cloud-side device third indication information indicating that the second image rendering process is to be performed on the (i+1)-th video frame; performing the first image rendering process on the (i+1)-th video frame, and receiving the (i+1)-th video frame, processed by the second image rendering process, sent by the cloud-side device; and merging the two processed versions of the (i+1)-th video frame and displaying the merged (i+1)-th video frame. Each of the first image rendering process and the second image rendering process is one of rich-detail-model processing and basic-detail-model processing.
Optionally, determining that the first image rendering process is performed on the (i+1)-th video frame locally and that the second image rendering process is performed on the (i+1)-th video frame by the cloud-side device includes: if the traffic tariff corresponding to the traffic tariff information has not reached a preset tariff value and the local computing power is below a preset threshold, determining that the first image rendering process is rich-detail-model processing and the second image rendering process is rich-detail-model processing; if the traffic tariff has not reached the preset tariff value and the local computing power is above the preset threshold, determining that the first image rendering process is basic-detail-model processing and the second image rendering process is rich-detail-model processing; if the traffic tariff has reached the preset tariff value and the local computing power is below the preset threshold, determining that the first image rendering process is rich-detail-model processing and the second image rendering process is basic-detail-model processing; and if the traffic tariff has reached the preset tariff value and the local computing power is above the preset threshold, determining that both the first and second image rendering processes are basic-detail-model processing.
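As a minimal sketch of the four-way determination above, assuming illustrative labels "rich" and "basic" for the two detail-model levels (the text does not name concrete data structures):

```python
def choose_rendering_split(tariff_reached: bool, local_power_high: bool):
    """Return (first_process, second_process) detail levels.

    tariff_reached:   the traffic tariff has reached the preset tariff value
    local_power_high: the local computing power is above the preset threshold
    """
    # Per the four cases: the local (first) process follows local computing
    # power, and the cloud (second) process follows the tariff state.
    first = "basic" if local_power_high else "rich"
    second = "basic" if tariff_reached else "rich"
    return first, second
```

Note that the two conditions decouple: the local process depends only on local computing power, while the cloud process depends only on whether the tariff value has been reached.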
Optionally, the image processing method further includes: if the merged (i+1)-th video frame is not obtained, displaying the (i+1)-th video frame by time warping.
The embodiment of the invention also provides an image processing apparatus, including: a first detection unit configured to detect a trigger operation of a user while the i-th video frame is displayed; a first sending unit configured to, in response to the trigger operation, send first indication information to the cloud-side device when it is detected that the network signal quality between the local device and the cloud-side device meets the video-frame reporting requirement, where the first indication information instructs the cloud-side device to perform image rendering on the (i+1)-th video frame; and a first receiving unit configured to receive and display the rendered (i+1)-th video frame sent by the cloud-side device.
The embodiment of the invention also provides another image processing apparatus, including: a second detection unit configured to detect a trigger operation of a user while the i-th video frame is displayed; a first determining unit configured to, in response to the trigger operation, determine according to traffic tariff information and local computing power that a first image rendering process is to be performed on the (i+1)-th video frame locally and that a second image rendering process is to be performed on the (i+1)-th video frame by the cloud-side device; a second sending unit configured to send to the cloud-side device third indication information indicating that the second image rendering process is to be performed on the (i+1)-th video frame; a first image processing unit configured to perform the first image rendering process on the (i+1)-th video frame; a second receiving unit configured to receive the (i+1)-th video frame, processed by the second image rendering process, sent by the cloud-side device; and a merging unit configured to merge the two processed versions of the (i+1)-th video frame and display the merged (i+1)-th video frame. Each of the first and second image rendering processes is one of rich-detail-model processing and basic-detail-model processing.
The embodiment of the invention also provides a computer-readable storage medium, which is a non-volatile or non-transitory storage medium storing a computer program, where the computer program, when executed by a processor, performs the steps of any one of the above image processing methods.
The embodiment of the invention also provides another image processing apparatus, including a memory and a processor, where the memory stores a computer program capable of running on the processor, and the processor, when running the computer program, performs the steps of any one of the above image processing methods.
Compared with the prior art, the technical scheme of the embodiment of the invention has the following beneficial effects:
When a trigger operation of the user is detected, if the network signal quality between the local device and the cloud-side device is detected to be good, the cloud-side device performs image rendering on the (i+1)-th video frame and the local device receives the rendered (i+1)-th video frame. By processing video frames on the cloud-side device, its computing power is fully utilized, the MTP latency of the cloud VR device is reduced, and the computing-power requirement on the local cloud VR device is greatly lowered.
Further, if the network signal quality between the local device and the cloud-side device is detected to be poor, the (i+1)-th video frame is rendered with local computing power, which greatly reduces the MTP latency of the cloud VR device.
In addition, the numbers of video frames in the first and second video frame sets are dynamically adjusted. When the local computing power available to the cloud VR device is strong, the number of video frames in the first set can be increased appropriately and the number in the second set reduced accordingly, further lowering the MTP latency of the cloud VR device. When the available local computing power is weak, the number of video frames in the first set can be reduced appropriately and the number in the second set increased accordingly, improving the image quality of the cloud VR device's display to some extent.
When a trigger operation of the user is detected, the way the cloud VR device and the cloud-side device render the (i+1)-th video frame is determined according to tariff information and the local computing power available to the cloud VR device, so that display image quality can be improved flexibly in accordance with both.
Drawings
FIG. 1 is a flow chart of an image processing method in an embodiment of the invention;
FIG. 2 is a schematic diagram of a video frame distribution in an embodiment of the invention;
FIG. 3 is a flow chart of another image processing method in an embodiment of the invention;
Fig. 4 is a schematic structural view of an image processing apparatus in an embodiment of the present invention;
fig. 5 is a schematic diagram of a structure of another image processing apparatus in an embodiment of the present invention.
Detailed Description
As described in the Background, in the prior art the MTP latency of cloud VR devices is high and display stuttering occurs.
In the embodiment of the invention, when a trigger operation of the user is detected, if the network signal quality between the local device and the cloud-side device is detected to be good, the cloud-side device can perform image rendering on the (i+1)-th video frame and the local device receives the rendered (i+1)-th video frame. By processing video frames on the cloud-side device, its computing power is fully utilized, the MTP latency of the cloud VR device is reduced, and the computing-power requirement on the local cloud VR device is greatly lowered.
In order to make the above objects, features and advantages of the present invention more comprehensible, embodiments accompanied with figures are described in detail below.
An embodiment of the present invention provides an image processing method, and detailed description is given below through specific steps with reference to fig. 1.
In a specific implementation, the image processing method may be executed by a control chip (such as a CPU chip) in the cloud VR device, or a chip module including the control chip in the cloud VR device.
In step S101, when the ith video frame is displayed, a trigger operation by the user is detected.
In implementations, whether a user wearing the cloud VR device triggers an operation of a video frame update may be detected in real time. The trigger operations corresponding to the user may be the same or different for different types of cloud VR devices.
For example, for a certain type of cloud VR device, when a click operation of the user is detected, it is determined that a trigger operation has been detected; or, when a side-to-side head shake of the user is detected, it is determined that a trigger operation has been detected.
In a specific application, a specific rule for detecting whether a user has a trigger operation may be set correspondingly according to different cloud VR devices, which is not specifically described in the embodiment of the present invention.
In the embodiment of the present invention, when the cloud VR device displays the i-th video frame, if a trigger operation of the user is detected, the following step S102 may be executed.
Step S102: in response to the trigger operation, sending first indication information to the cloud-side device when the network signal quality between the local device and the cloud-side device is detected to meet the video-frame reporting requirement.
In a specific implementation, the quality of the communication network between the cloud VR device and the cloud-side device can be detected and acquired in real time. Communication network quality may be characterized by any one or more of the Block Error Rate (BLER) of the air-interface channel, the Signal to Interference plus Noise Ratio (SINR), network quality of service (QoS), and so on. It will be appreciated that communication network quality may also be characterized by other parameters that reflect communication quality.
In the embodiment of the invention, when the BLER of the air-interface channel is below a preset block-error-rate threshold, the SINR is above a preset SINR threshold, and the network quality of service meets the throughput, latency, packet-loss-rate and similar requirements, it can be determined that the communication network quality between the cloud VR device and the cloud-side device is above a preset quality threshold, and thus that the video-frame reporting requirement is met. Otherwise, when the communication network quality between the cloud VR device and the cloud-side device is below the preset quality threshold, it is determined that the video-frame reporting requirement is not met.
For example, when the BLER of the air-interface channel is below 1%, the SINR is above -3 dB, and the throughput, latency, packet-loss-rate and similar QoS indicators are all normal, it can be determined that the communication network quality between the cloud VR device and the cloud-side device is good.
Accordingly, when the BLER of the air-interface channel is greater than or equal to the preset block-error-rate threshold, the SINR is less than or equal to the preset SINR threshold, or the network quality of service fails to meet any of the throughput, latency, packet-loss-rate or similar requirements, it can be determined that the communication network quality between the cloud VR device and the cloud-side device is below the preset quality threshold, that is, the communication network quality is mediocre or poor.

For example, when the BLER of the air-interface channel is greater than or equal to 1%, the SINR is less than or equal to -3 dB, or the network quality of service fails to meet the latency and packet-loss-rate indicators, it can be determined that the communication network quality between the cloud VR device and the cloud-side device is mediocre or poor.
The above example is only one specific example of determining the quality of the communication network between the cloud VR device and the cloud-side apparatus. In implementations, the BLER threshold and SINR threshold may each be set to other values, and are not limited to the examples described above.
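The reporting check can be sketched as below, using the 1% BLER and -3 dB SINR figures from the example; the function name, the QoS check collapsed to a single boolean, and the default thresholds are illustrative assumptions:

```python
def meets_reporting_requirement(bler: float, sinr_db: float, qos_ok: bool,
                                bler_threshold: float = 0.01,
                                sinr_threshold_db: float = -3.0) -> bool:
    """True when the air-interface link is good enough to hand the next
    frame's rendering to the cloud-side device; False means the cloud VR
    device should render locally instead."""
    return bler < bler_threshold and sinr_db > sinr_threshold_db and qos_ok
```

For instance, a link with BLER 0.5%, SINR 10 dB, and normal QoS meets the requirement, while any one indicator failing makes the check return False.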
In the embodiment of the invention, when the cloud VR device detects that the network signal quality between itself and the cloud-side device meets the video-frame reporting requirement, the cloud VR device can send first indication information to the cloud-side device, where the first indication information instructs the cloud-side device to perform image rendering on the (i+1)-th video frame.
That is, when the cloud VR device detects that the network signal quality between itself and the cloud-side device is good, it may instruct the cloud-side device to perform image rendering on the (i+1)-th video frame.
After receiving the first indication information, the cloud-side device can perform image rendering on the (i+1)-th video frame. For the specific algorithm and procedure the cloud-side device uses, reference may be made to the prior art; the embodiment of the present invention does not describe them in detail.
Step S103, receiving and displaying the i+1th video frame subjected to the image rendering processing sent by the cloud side device.
In a specific implementation, the cloud VR device may receive the i+1th video frame after the image rendering process sent by the cloud side device, and display the i+1th video frame after the image rendering process.
That is, when it is detected that the quality of the communication network between the cloud VR device and the cloud-side apparatus is good, all video frames may be rendered only by the cloud-side apparatus, and the cloud VR device may not render the video frames.
It can be seen that, when the network signal quality between the local device and the cloud-side device is detected to be good, the cloud-side device renders the (i+1)-th video frame and the local device receives the rendered (i+1)-th video frame. Because the network signal quality is good, the communication latency between the cloud VR device and the cloud-side device is low; rendering video frames with the cloud-side device's strong computing capability reduces the computational load and power consumption of the cloud VR device while meeting frame-rate, clarity, and perceptual-quality requirements, lowering the computing-power requirement on the local cloud VR device.
In a specific implementation, when it is detected that the communication network quality between the cloud VR device and the cloud-side device is below the preset quality threshold, it can be determined that the network quality between them is mediocre or poor and may not meet the low-latency requirement.

In the embodiment of the invention, when the cloud VR device detects that the network quality between itself and the cloud-side device is mediocre or poor, it can determine that the network signal quality does not meet the video-frame reporting requirement. In that case, if the cloud-side device were to render the (i+1)-th video frame, the MTP latency of the cloud VR device could be high because of the poor network quality between the cloud VR device and the cloud-side device.

To reduce the MTP latency of the cloud VR device, the cloud VR device may render the (i+1)-th video frame itself and display the rendered (i+1)-th video frame.
That is, when it is determined that the network signal quality between the local device and the cloud-side device does not meet the video-frame reporting requirement, the cloud VR device alone may render the video frame, without the cloud-side device participating in the rendering.
The video frames are processed through the cloud VR device, and the cloud VR device does not need to communicate with cloud side equipment, so that MTP time delay of the cloud VR device can be greatly reduced.
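A compact sketch of the two branches (steps S101-S103 plus the local fallback); the callback names stand in for components the text does not specify:

```python
def handle_trigger(i: int, reporting_ok: bool,
                   request_cloud_render, render_locally, display):
    """On a user trigger while frame i is displayed, route frame i+1:
    cloud rendering when the reporting requirement is met (S102-S103),
    local rendering otherwise. All callbacks are illustrative."""
    if reporting_ok:
        frame = request_cloud_render(i + 1)  # first indication info -> cloud
    else:
        frame = render_locally(i + 1)
    display(frame)
    return frame
```

Under this sketch, a good link always routes frame i+1 to the cloud, and a poor link keeps rendering fully local, so no frame ever waits on a slow round trip.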
However, since the cloud VR device must render the (i+1)-th video frame itself, the demand on its local computing power is correspondingly high.
If the local load of the cloud VR device is high (i.e., the local computing power it can spare is low), the cloud VR device may be unable to render the (i+1)-th video frame, or unable to render it in time. Therefore, after detecting that the network signal quality between the local device and the cloud-side device does not meet the video-frame reporting requirement, and before rendering the (i+1)-th video frame, it may further be determined that the local load of the cloud VR device is not greater than a first threshold.
In the embodiment of the present invention, the first threshold may be set according to the working capability of the cloud VR device. The first threshold may be characterized as the highest load that the cloud VR device is able to normally and timely image render the video frame. In other words, if the local load of the cloud VR device is greater than the first threshold, the cloud VR device is not characterized to perform image rendering processing on the video frame normally and timely.
If the local load of the cloud VR device is greater than the first threshold, the cloud VR device may display the (i+1)-th video frame by time warping. Specifically, the cloud VR device may use Asynchronous Time Warping (ATW) for display, which also alleviates display stuttering to some extent.
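The load guard and ATW fallback can be sketched as follows; the threshold value and callbacks are illustrative, and a real ATW pass would re-project the last rendered frame rather than return a tag:

```python
def render_with_load_guard(frame_idx: int, local_load: float,
                           first_threshold: float,
                           render_locally, warp_previous):
    """Render the frame locally only if the load permits normal, timely
    rendering; otherwise fall back to Asynchronous Time Warping of the
    previous frame to soften display stuttering."""
    if local_load <= first_threshold:
        return render_locally(frame_idx)
    return warp_previous(frame_idx)
```

The design choice mirrors the text: the threshold is the highest load at which normal, timely rendering is still possible, so exceeding it switches to the cheaper warp path instead of dropping the frame.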
In the embodiment of the invention, when the cloud VR device detects that the network quality between itself and the cloud-side device is mediocre or poor, it can determine that the network signal quality does not meet the video-frame reporting requirement. In that case, the cloud VR device may render the i-th video frame and display the rendered i-th video frame.
When the (i+1)-th video frame is to be rendered, if it belongs to the second video frame set, the cloud VR device may send second indication information to the cloud-side device, instructing it to render the (i+1)-th video frame and the M-1 video frames that follow it. After receiving the second indication information, the cloud-side device can render the (i+1)-th video frame and the M-1 following video frames, and send the rendered frames to the cloud VR device, which displays them.
If the (i+1) th video frame is located in the first video frame set, performing image rendering processing on the (i+1) th video frame, and displaying the (i+1) th video frame.
In the embodiment of the present invention, the video frames included in the first video frame set refer to video frames in which the cloud VR device locally performs image rendering processing; the video frames included in the second video frame set refer to video frames for which image rendering processing is performed by the cloud-side apparatus.
In implementations, the video frames in the first set of video frames and the video frames in the second set of video frames may be pre-assigned.
Referring to fig. 2, a schematic diagram of a video frame distribution in an embodiment of the present invention is provided.
In fig. 2, L1 represents a first video frame set, i.e., a set of video frames LF rendered by the cloud VR device at time 1. M1 represents a set of video frames CF rendered by the cloud-side apparatus at time 1.
Accordingly, L2 represents a set of video frames LF rendered by the cloud VR device at time 2, and M2 represents a set of video frames CF rendered by the cloud-side device at time 2. Similarly, li represents a set of video frames LF rendered by the cloud VR device at the i-th time after the current time, and Mi represents a set of video frames CF rendered by the cloud-side device at the i-th time after the current time.
Referring to fig. 2, on the time axis, if the i-th video frame is the last video frame of L1, the (i+1)-th video frame is the first video frame of the next set, M1, which contains M video frames, and image rendering of the M video frames in M1 is performed by the cloud-side device. If the i-th video frame is not the last video frame of L1, the local cloud VR device continues to render the (i+1)-th video frame.

For example, if the i-th video frame is the 2nd video frame in L1 and L1 contains 4 video frames, the local cloud VR device continues to render the 3rd video frame.
The number of video frames comprised by the different sets of video frames may be equal or unequal. For example, the number of video frames included in each video frame set is equal, that is, the number of video frames included in L1, the number of video frames included in M1, the number of video frames included in L2, … …, the number of video frames included in Li, and the number of video frames included in Mi are equal.
For another example, L1, L2, ..., and Li each contain 4 video frames, while M1, M2, ..., and Mi each contain 6 video frames. Alternatively, L1 may contain 4 video frames, L2 may contain 2, ..., and Li may contain 3.
In a specific implementation, the number of video frames included in different video frame sets may be set according to an actual application scenario.
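A hypothetical sketch of such pre-assignment (the function name and the 4/6 sizes are assumptions taken from the examples above): a list of per-set sizes is expanded into explicit frame ranges for the local sets L1, L2, ... and the cloud sets M1, M2, ...:

```python
# Hypothetical sketch: pre-assign frame indices to alternating local/cloud
# sets from a list of set sizes, e.g. sizes [4, 6, 4, 6] -> L1, M1, L2, M2.
def build_schedule(set_sizes):
    """Return (local_sets, cloud_sets) as lists of inclusive (start, end) ranges."""
    local_sets, cloud_sets = [], []
    frame = 1
    for k, size in enumerate(set_sizes):
        rng = (frame, frame + size - 1)
        # Even positions are local sets (L1, L2, ...), odd are cloud sets (M1, M2, ...).
        (local_sets if k % 2 == 0 else cloud_sets).append(rng)
        frame += size
    return local_sets, cloud_sets
```

For sizes [4, 6, 4, 6] this yields L1 = frames 1-4, M1 = 5-10, L2 = 11-14, M2 = 15-20.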
In implementations, the video frames in the first set of video frames and the video frames in the second set of video frames may also be dynamically adjusted.
In the embodiment of the invention, the numbers of video frames in the first video frame set and the second video frame set can be adjusted according to preset adjustment conditions. The preset adjustment conditions may include one or more of the following: the signal quality between the cloud VR device and the cloud-side device, the network service quality, the traffic tariff information, and the local computing power available to the cloud VR device.
In the embodiment of the present invention, when the local computing power available to the cloud VR device (generally the computing capability of its CPU and/or GPU) is high and the quality of the communication network with the cloud-side device is detected to be lower than a preset quality threshold, the number of video frames rendered by the cloud VR device may be increased appropriately, and the number of video frames rendered by the cloud-side device reduced accordingly.
For example, suppose the local computing power of the cloud VR device is high. At time 1, the cloud VR device renders 4 video frames and the cloud-side device renders 4 video frames. At time 2, if the cloud VR device detects that the BLER of the air-interface channel is greater than a preset BLER threshold, the number of video frames rendered by the cloud VR device is adjusted to 5, and the number rendered by the cloud-side device is adjusted to 3.
As another example, suppose the local computing power of the cloud VR device is high. At time 1, the cloud VR device renders 4 video frames and the cloud-side device renders 4. At time 2, when the cloud VR device detects that the current traffic charge is about to reach its preset limit, the number of video frames rendered by the cloud VR device is adjusted to 6, and the number rendered by the cloud-side device is adjusted to 2.
In the embodiment of the invention, when the local computing power of the cloud VR device is low and the quality of the communication network with the cloud-side device is detected to be lower than the preset quality threshold, rendering more video frames locally may cause the display to stutter. The cloud VR device may therefore increase the number of video frames rendered by the cloud-side device and correspondingly reduce the number it renders itself.
For example, suppose the local computing power of the cloud VR device is low. At time 1, the cloud VR device renders 3 video frames and the cloud-side device renders 5. At time 2, when the cloud VR device detects that the remaining traffic allowance is sufficient, the number of video frames rendered by the cloud VR device is adjusted to 2, and the number rendered by the cloud-side device is adjusted to 6.
In a specific implementation, the number of video frames that are rendered by the cloud VR device and the cloud-side device may be adjusted according to the degree of variation of the quality of the communication network.
For example, at time 1, the cloud VR device renders 4 video frames and the cloud-side device renders 4. At time 2, if the cloud VR device detects that the SINR of the air-interface channel is less than -3 dB, the number of video frames rendered by the cloud VR device is adjusted to 5 and the number rendered by the cloud-side device to 3. At time 3, when the SINR drops below -6 dB, the numbers are adjusted to 6 and 2, respectively.
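The tiered adjustment in this example can be sketched as follows (illustrative only; the thresholds of -3 dB and -6 dB and the total of 8 frames per period are taken from the example, and the function name is an assumption):

```python
# Hypothetical sketch of the tiered adjustment above: the worse the
# air-interface SINR, the more frames shift to local rendering on the
# cloud VR device, away from the degrading network link.
def adjust_split(sinr_db, total=8):
    """Return (local_count, cloud_count) for the next scheduling period."""
    if sinr_db < -6:
        local = 6        # time-3 case: very poor channel
    elif sinr_db < -3:
        local = 5        # time-2 case: poor channel
    else:
        local = 4        # nominal 4/4 split at time 1
    return local, total - local
```

A real implementation would likely hysterese these thresholds to avoid oscillating between splits on a fluctuating channel.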
From the above, the cooperative processing of the local cloud VR device and the cloud-side device can reduce the MTP latency of the cloud VR device and reduce display stuttering. When the local computing power of the cloud VR device is strong, the number of video frames in the first video frame set can be increased appropriately and the number in the second video frame set correspondingly reduced, further lowering the MTP latency. When the local computing power of the cloud VR device is weak, the number of video frames in the first set can be reduced appropriately and the number in the second set correspondingly increased, which improves the image quality of the cloud VR device's display to a certain extent.
Referring to fig. 3, another image processing method in the embodiment of the present invention is described below through specific steps.
In step S301, when the ith video frame is displayed, a trigger operation by the user is detected.
In the embodiment of the present invention, for the specific execution of step S301, reference may be made to step S101; details are not repeated here.
Step S302: in response to the triggering operation, determine, according to the traffic tariff information and the local computing power, that a first image rendering process is to be performed locally on the (i+1)-th video frame and that the cloud-side device is to perform a second image rendering process on the (i+1)-th video frame, and send to the cloud-side device third indication information instructing it to perform the second image rendering process on the (i+1)-th video frame.
In the embodiment of the present invention, the first image rendering process and the second image rendering process may use the same image rendering method or different image rendering methods.
In a specific implementation, the first image rendering process may be either rich detail model processing or basic detail model processing, and likewise for the second image rendering process. Which processing method the cloud VR device selects for each of the first and second image rendering processes may be determined by the traffic tariff information and the local computing power available to the cloud VR device.
Step S303: perform the first image rendering process on the (i+1)-th video frame, and receive the (i+1)-th video frame, subjected to the second image rendering process, sent by the cloud-side device.
Step S304: combine the (i+1)-th video frame subjected to the first image rendering process with the (i+1)-th video frame subjected to the second image rendering process, and display the combined (i+1)-th video frame.
In a specific implementation, the cloud VR device may determine the manner of performing image rendering processing on the (i+1)-th video frame according to the current traffic tariff information and the available local computing power. Correspondingly, the cloud-side device may also determine its manner of rendering the (i+1)-th video frame according to the current traffic tariff information and the computing power of the cloud VR device.
In an embodiment of the present invention, the manner of rendering a video frame may include rich detail model processing and basic detail model processing. A rich detail model uses more triangle patches and finer granularity; its model information is large, so it can enrich the characteristic parts, edges, and detail textures of an image, but its rendering workload is large. A basic detail model uses fewer triangle patches, carries less detail and less model information, presents less of the image's fine structure, and has a smaller rendering workload.
In the embodiment of the invention, when the traffic charge is high and the local computing power available to the cloud VR device is sufficient, the cloud-side device may render the video frame using the basic detail model while the cloud VR device renders it using the rich detail model. A rendering engine in the cloud VR device then interpolates between the frame it rendered with the rich detail model and the frame rendered with the basic detail model output by the cloud-side device to obtain the final output video frame.
For example, if the cloud VR device detects that the day's traffic charge has reached the 10-yuan daily limit and the utilization of its local GPU is below 50%, the cloud VR device renders the (i+1)-th video frame using the rich detail model, and the cloud-side device renders it using the basic detail model. The cloud VR device then interpolates and merges the two rendered (i+1)-th video frames to obtain and display the final output video frame.
When the traffic charge is high and the local computing power available to the cloud VR device is insufficient, both the cloud-side device and the cloud VR device may render the video frame using the basic detail model. The rendering engine in the cloud VR device then interpolates between its own basic-detail render and the basic-detail render output by the cloud-side device to obtain the final output video frame.
For example, if the cloud VR device detects that the day's traffic charge has reached the 10-yuan daily limit and the utilization of its local GPU is above 50%, both the cloud VR device and the cloud-side device render the (i+1)-th video frame using the basic detail model. The cloud VR device then interpolates and merges the two rendered (i+1)-th video frames to obtain and display the final output video frame.
When the traffic charge is low and the local computing power available to the cloud VR device is insufficient, the cloud-side device may render the video frame using the rich detail model while the cloud VR device uses the basic detail model. The rendering engine in the cloud VR device then interpolates between its basic-detail render and the rich-detail render output by the cloud-side device to obtain the final output video frame.
For example, if the cloud VR device detects that the day's traffic charge has not reached the 10-yuan daily limit and the utilization of its local GPU is above 50%, the cloud VR device renders the (i+1)-th video frame using the basic detail model, and the cloud-side device renders it using the rich detail model. The cloud VR device then interpolates and merges the two rendered (i+1)-th video frames to obtain and display the final output video frame.
When the traffic charge is low and the local computing power available to the cloud VR device is sufficient, both the cloud-side device and the cloud VR device may render the video frame using the rich detail model. The rendering engine in the cloud VR device then interpolates between the two rich-detail renders to obtain the final output video frame.
For example, if the cloud VR device detects that the day's traffic charge has not reached the 10-yuan daily limit and the utilization of its local GPU is below 50%, both the cloud VR device and the cloud-side device render the (i+1)-th video frame using the rich detail model. The cloud VR device then interpolates and merges the two rendered (i+1)-th video frames to obtain and display the final output video frame.
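The four cases above can be condensed into a small decision table. The sketch below is illustrative (the function name and boolean inputs are assumptions): `tariff_reached` stands for the daily traffic charge hitting its limit (e.g. 10 yuan/day), and `gpu_busy` for local GPU utilization above 50%.

```python
# Hypothetical sketch of the four-case detail-model selection described above.
def select_models(tariff_reached, gpu_busy):
    """Return (local_model, cloud_model), each 'rich' or 'basic'."""
    if tariff_reached and not gpu_busy:
        return "rich", "basic"    # charge high, local compute sufficient
    if tariff_reached and gpu_busy:
        return "basic", "basic"   # charge high, local compute insufficient
    if not tariff_reached and gpu_busy:
        return "basic", "rich"    # charge low, local compute insufficient
    return "rich", "rich"         # charge low, local compute sufficient
```

Note the symmetry: a reached tariff pushes the cloud side toward the cheap-to-transmit basic model, and a busy local GPU pushes the local side toward the cheap-to-compute basic model.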
In a specific implementation, after the combined image is displayed, if the cloud VR device has not yet received a new combined video frame, it may continue displaying using Asynchronous TimeWarp (ATW), which can also alleviate display stuttering to a certain extent.
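The interpolation merge used throughout the examples above can be illustrated, very roughly, as a per-pixel linear blend of the two renders. This is only a sketch (the patent does not specify the interpolation; real engines would blend at mesh or texture level, and the function name and alpha weighting are assumptions):

```python
# Hypothetical sketch of the merge step: linear interpolation between the
# locally rendered frame and the cloud-rendered frame, pixel by pixel.
def merge_frames(local_px, cloud_px, alpha=0.5):
    """Blend two equal-size frames; alpha weights the local render."""
    assert len(local_px) == len(cloud_px), "frames must match in size"
    return [alpha * a + (1 - alpha) * b for a, b in zip(local_px, cloud_px)]
```

With alpha = 0.5, a pixel that is 0 in the local render and 100 in the cloud render blends to 50.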
In summary, by using the traffic tariff information and the local computing power of the cloud VR device to determine whether to render a video frame with the rich detail model or the basic detail model, the MTP latency can be further reduced.
Referring to fig. 4, an image processing apparatus 40 in an embodiment of the present invention is provided, including: a first detection unit 401, a first transmission unit 402, and a first reception unit 403, wherein:
a first detecting unit 401, configured to detect a trigger operation of a user when displaying an i-th video frame;
The first sending unit 402 may be configured to, in response to the triggering operation, send first indication information to the cloud-side device when it is detected that the network signal quality with the cloud-side device meets the video frame reporting requirement, where the first indication information is used to instruct the cloud-side device to perform image rendering processing on the (i+1)-th video frame;
The first receiving unit 403 may be configured to receive and display the i+1th video frame subjected to the image rendering processing sent by the cloud side device.
In a specific implementation, for the processing flows of the first detection unit 401, the first sending unit 402, and the first receiving unit 403, reference may be made to steps S101 to S103; details are not repeated here.
In a specific implementation, the image processing apparatus 40 may correspond to a processing chip having a data processing function in the cloud VR device, to a chip module including such a processing chip, or to the cloud VR device itself.
Referring to fig. 5, another image processing apparatus 50 in an embodiment of the present invention is provided, including: a second detection unit 501, a first determination unit 502, a second transmission unit 503, a first image processing unit 504, a second reception unit 505, and a combination unit 506, wherein:
A second detecting unit 501 configured to detect a trigger operation of a user when displaying an i-th video frame;
a first determining unit 502, configured to determine, in response to the triggering operation, that a first image rendering process is performed on an i+1th video frame locally according to traffic tariff information and a local computing force, and determine that a second image rendering process is performed on the i+1th video frame by a cloud side device;
a second sending unit 503, configured to send, to the cloud-side device, third instruction information that instructs second image rendering processing to the i+1th video frame;
A first image processing unit 504, configured to perform a first image rendering process on the i+1th video frame;
a second receiving unit 505, configured to receive an i+1th video frame sent by the cloud side device and subjected to a second image rendering process;
A merging unit 506, configured to merge the i+1th video frame subjected to the first image rendering process and the i+1th video frame subjected to the second image rendering process, and display the merged i+1th video frame; the first image rendering process and the second image rendering process are one of a rich detail model process and a basic detail model process.
In a specific implementation, for the execution flows of the second detection unit 501, the first determination unit 502, the second sending unit 503, the first image processing unit 504, the second receiving unit 505, and the merging unit 506, reference may be made to steps S301 to S304; details are not repeated here.
In a specific implementation, the image processing apparatus 50 may correspond to a processing chip having a data processing function in the cloud VR device, to a chip module including such a processing chip, or to the cloud VR device itself.
In a specific implementation, each apparatus and each module/unit included in the products described in the above embodiments may be a software module/unit, a hardware module/unit, or partly a software module/unit and partly a hardware module/unit.
For example, for a device or product applied to or integrated on a chip, its modules/units may all be implemented in hardware such as circuits; alternatively, at least some modules/units may be implemented as a software program running on a processor integrated inside the chip, with the remaining modules/units (if any) implemented in hardware such as circuits. For a device or product applied to or integrated in a chip module, its modules/units may all be implemented in hardware such as circuits, and different modules/units may be located in the same component of the chip module (such as a chip or a circuit module) or in different components; alternatively, at least some modules/units may be implemented as a software program running on a processor integrated inside the chip module, with the remaining modules/units (if any) implemented in hardware such as circuits. For a device or product applied to or integrated in a terminal, likewise, all modules/units may be implemented in hardware such as circuits and located in the same component (such as a chip or a circuit module) or different components of the terminal; alternatively, at least some modules/units may be implemented as a software program running on a processor integrated within the terminal, with the remaining modules/units (if any) implemented in hardware such as circuits.
The embodiment of the invention further provides a computer-readable storage medium, which is a non-volatile or non-transitory storage medium and stores a computer program; when executed by a processor, the computer program performs the steps of the image processing method provided by any of the above embodiments.
The embodiment of the invention further provides another image processing device, including a memory and a processor, the memory storing a computer program capable of running on the processor; when running the computer program, the processor executes the steps of the image processing method provided by any of the above embodiments.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program instructing related hardware; the program may be stored on a computer-readable storage medium, and the storage medium may include ROM, RAM, magnetic disks, or optical disks.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be made by those skilled in the art without departing from the spirit and scope of the invention, and the scope of the invention shall be subject to the appended claims.
Claims (13)
1. An image processing method, comprising:
when the ith video frame is displayed, detecting triggering operation of a user;
in response to the triggering operation, sending first indication information to a cloud-side device when it is detected that the network signal quality with the cloud-side device meets a video frame reporting requirement, wherein the first indication information is used for instructing the cloud-side device to perform image rendering processing on an (i+1)-th video frame;
receiving and displaying the (i+1)-th video frame, subjected to image rendering processing, sent by the cloud-side device;
when it is detected that the network signal quality with the cloud-side device does not meet the video frame reporting requirement, performing image rendering processing on the i-th video frame in a first video frame set; if the (i+1)-th video frame is located in a second video frame set, sending second indication information to the cloud-side device, wherein the second indication information is used for instructing the cloud-side device to perform image rendering processing on the (i+1)-th video frame and the M-1 video frames following it; receiving and displaying the (i+1)-th video frame and the following M-1 video frames, subjected to image rendering processing, sent by the cloud-side device; if the (i+1)-th video frame is located in the first video frame set, performing image rendering processing on the (i+1)-th video frame and displaying it; wherein the first video frame set includes video frames on which image rendering processing is performed locally, and the second video frame set includes video frames on which image rendering processing is performed by the cloud-side device.
2. The image processing method according to claim 1, further comprising: when it is detected that the network signal quality with the cloud-side device does not meet the video frame reporting requirement, performing image rendering processing on the (i+1)-th video frame, and displaying the (i+1)-th video frame subjected to the image rendering processing.
3. The image processing method according to claim 2, further comprising, before performing image rendering processing on the i+1th video frame:
It is determined that the local load is not greater than a first threshold.
4. The image processing method according to claim 3, further comprising: if it is determined that the local load is greater than the first threshold, displaying the (i+1)-th video frame by time warping.
5. The image processing method of claim 1, wherein video frames in the first set of video frames and the second set of video frames are pre-assigned.
6. The image processing method according to claim 1, wherein the numbers of video frames in the first video frame set and the second video frame set are adjusted according to a preset adjustment condition; the preset adjustment condition comprises at least one of the following: channel quality with the cloud-side device, network service quality, traffic tariff information, and local computing power.
7. An image processing method, comprising:
when the ith video frame is displayed, detecting triggering operation of a user;
in response to the triggering operation, determining, according to traffic tariff information and local computing power, that a first image rendering process is to be performed locally on the (i+1)-th video frame, determining that a cloud-side device is to perform a second image rendering process on the (i+1)-th video frame, and sending to the cloud-side device third indication information instructing the second image rendering process on the (i+1)-th video frame;
Performing first image rendering processing on the (i+1) th video frame, and receiving the (i+1) th video frame subjected to second image rendering processing, which is sent by the cloud side equipment;
Combining the (i+1) th video frame subjected to the first image rendering process with the (i+1) th video frame subjected to the second image rendering process, and displaying the combined (i+1) th video frame; the first image rendering process and the second image rendering process are one of a rich detail model process and a basic detail model process.
8. The image processing method according to claim 7, wherein the determining that the first image rendering process is to be performed locally on the (i+1)-th video frame and the determining that the cloud-side device is to perform the second image rendering process on the (i+1)-th video frame comprise:
if the traffic charge corresponding to the traffic tariff information has not reached a preset tariff value and the local computing power is lower than a preset threshold, determining that the first image rendering process is rich detail model processing and the second image rendering process is rich detail model processing;
if the traffic charge corresponding to the traffic tariff information has not reached the preset tariff value and the local computing power is higher than the preset threshold, determining that the first image rendering process is basic detail model processing and the second image rendering process is rich detail model processing;
if the traffic charge corresponding to the traffic tariff information has reached the preset tariff value and the local computing power is lower than the preset threshold, determining that the first image rendering process is rich detail model processing and the second image rendering process is basic detail model processing;
and if the traffic charge corresponding to the traffic tariff information has reached the preset tariff value and the local computing power is higher than the preset threshold, determining that the first image rendering process is basic detail model processing and the second image rendering process is basic detail model processing.
9. The image processing method according to claim 7 or 8, further comprising: if the combined (i+1)-th video frame is not received, displaying the (i+1)-th video frame by time warping.
10. An image processing apparatus, comprising:
the first detection unit is used for detecting triggering operation of a user when the ith video frame is displayed;
a first sending unit, configured to, in response to the triggering operation, send first indication information to a cloud-side device when it is detected that the network signal quality with the cloud-side device meets a video frame reporting requirement, wherein the first indication information is used for instructing the cloud-side device to perform image rendering processing on the (i+1)-th video frame;
a first receiving unit, configured to receive and display the (i+1)-th video frame, subjected to image rendering processing, sent by the cloud-side device;
a first processing unit, configured to, when it is detected that the network signal quality with the cloud-side device does not meet the video frame reporting requirement, perform image rendering processing on the i-th video frame in a first video frame set; if the (i+1)-th video frame is located in a second video frame set, send second indication information to the cloud-side device, wherein the second indication information is used for instructing the cloud-side device to perform image rendering processing on the (i+1)-th video frame and the M-1 video frames following it; receive and display the (i+1)-th video frame and the following M-1 video frames, subjected to image rendering processing, sent by the cloud-side device; and if the (i+1)-th video frame is located in the first video frame set, perform image rendering processing on the (i+1)-th video frame and display it; wherein the first video frame set includes video frames on which image rendering processing is performed locally, and the second video frame set includes video frames on which image rendering processing is performed by the cloud-side device.
11. An image processing apparatus, comprising:
the second detection unit is used for detecting triggering operation of a user when the ith video frame is displayed;
a first determining unit, configured to, in response to the triggering operation, determine, according to traffic tariff information and local computing power, that a first image rendering process is to be performed locally on the (i+1)-th video frame, and determine that a cloud-side device is to perform a second image rendering process on the (i+1)-th video frame;
a second sending unit, configured to send third indication information indicating that the i+1th video frame is subjected to second image rendering processing to the cloud side device;
A first image processing unit, configured to perform a first image rendering process on the (i+1) th video frame;
The second receiving unit is used for receiving the (i+1) th video frame which is sent by the cloud side equipment and is subjected to second image rendering processing;
A merging unit for merging the i+1th video frame subjected to the first image rendering process with the i+1th video frame subjected to the second image rendering process, and displaying the merged i+1th video frame; the first image rendering process and the second image rendering process are one of a rich detail model process and a basic detail model process.
12. A computer-readable storage medium, which is a non-volatile storage medium or a non-transitory storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, performs the steps of the image processing method according to any one of claims 1 to 9.
13. An image processing apparatus comprising a memory and a processor, said memory having stored thereon a computer program executable on said processor, characterized in that said processor executes the steps of the image processing method according to any of claims 1 to 9 when said computer program is executed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111665600.6A CN114302125B (en) | 2021-12-30 | 2021-12-30 | Image processing method and device and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114302125A (en) | 2022-04-08 |
CN114302125B (en) | 2024-09-03 |
Family
ID=80974303
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111665600.6A Active CN114302125B (en) | 2021-12-30 | 2021-12-30 | Image processing method and device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114302125B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111901635A (en) * | 2020-06-17 | 2020-11-06 | 北京视博云信息技术有限公司 | Video processing method, device, storage medium and equipment |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107274469A (en) * | 2017-06-06 | 2017-10-20 | 清华大学 | The coordinative render method of Virtual reality |
US10846042B2 (en) * | 2018-10-31 | 2020-11-24 | Hewlett Packard Enterprise Development Lp | Adaptive rendering for untethered multi-user virtual reality |
CN109587555B (en) * | 2018-11-27 | 2020-12-22 | Oppo广东移动通信有限公司 | Video processing method and device, electronic equipment and storage medium |
CN112738553A (en) * | 2020-12-18 | 2021-04-30 | 深圳市微网力合信息技术有限公司 | Self-adaptive cloud rendering system and method based on network communication quality |
CN113706673B (en) * | 2021-07-29 | 2024-07-23 | 中国南方电网有限责任公司超高压输电公司 | Cloud rendering frame platform applied to virtual augmented reality technology |
- 2021-12-30: CN CN202111665600.6A patent/CN114302125B/en — Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111901635A (en) * | 2020-06-17 | 2020-11-06 | 北京视博云信息技术有限公司 | Video processing method, device, storage medium and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN114302125A (en) | 2022-04-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20170134461A1 (en) | Method and device for adjusting definition of a video adaptively | |
EP3253064A1 (en) | Frame loss method for video frame and video sending apparatus | |
CN114827662B (en) | Video resolution adaptive adjustment method, device, equipment and storage medium | |
US20200267396A1 (en) | Human visual system adaptive video coding | |
CN108600675B (en) | Channel path number expansion method, device, network video recorder and storage medium | |
CN116635885A (en) | Apparatus and method for optimizing power consumption in frame rendering process | |
DE112009002346T5 (en) | Processing video data in devices with limited resources | |
CN116506665A (en) | VR streaming method, system, device and storage medium for self-adaptive code rate control | |
Yang et al. | Delay-optimized multi-user VR streaming via end-edge collaborative neural frame interpolation | |
CN110858388B (en) | Method and device for enhancing video image quality | |
CN112804527B (en) | Image output method, image output device and computer-readable storage medium | |
CN114302125B (en) | Image processing method and device and computer readable storage medium | |
CN113573142A (en) | Resolution adjustment method and device | |
CN102243856A (en) | Method and device for dynamically switching screen data processing modes | |
CN113315999A (en) | Virtual reality optimization method, device, equipment and storage medium | |
CN116996639B (en) | Screen-projection frame rate acquisition method and device, computer equipment and storage medium | |
CN110912922B (en) | Image transmission method and device, electronic equipment and storage medium | |
WO2023151644A1 (en) | Adaptive loading-aware system management for balancing power and performance | |
US20230067568A1 (en) | Frame Sequence Quality Booster under Uneven Quality Conditions | |
US11936698B2 (en) | Systems and methods for adaptive video conferencing | |
CN115484382A (en) | Parameter control method, electronic device, computer storage medium, and program product | |
CN112774193A (en) | Image rendering method of cloud game | |
CN113766315A (en) | Display device and video information processing method | |
CN112019918A (en) | Video playing method and device | |
WO2023031989A1 (en) | Video delivery device, system, method, and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||