CN111083495A - Rapid and efficient 3D-HEVC (high efficiency video coding) method for reducing complexity - Google Patents
Rapid and efficient 3D-HEVC (high efficiency video coding) method for reducing complexity
- Publication number
- CN111083495A (application CN201911149001.1A)
- Authority
- CN
- China
- Prior art keywords
- treeblock
- motion
- depth
- current
- complexity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/567—Motion estimation based on rate distortion criteria
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/96—Tree coding, e.g. quad-tree coding
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention provides a fast and efficient 3D-HEVC method for reducing complexity, comprising the following steps: first, the treeblock decision is started and prediction variables are derived from the spatio-temporal, previously coded view and texture-depth co-located treeblocks; the motion complexity of the neighboring treeblocks and the current treeblock is then identified from these prediction variables, and the current treeblock is classified as a static or complex-motion treeblock; CU depth levels of texture and depth treeblocks are adaptively skipped according to the motion characteristics of the static or complex-motion treeblock; finally, the rate-distortion cost of the current treeblock and the rate-distortion cost values of the neighboring treeblocks are computed from the prediction variables, unnecessary mode decisions are skipped, and mode prediction is terminated early. By applying the fast CU depth-level range determination method and the adaptive early-termination mode prediction method, the invention reduces the computational complexity of the 3D-HEVC encoder and effectively saves HTM encoding time while preserving rate-distortion performance.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a fast and efficient 3D-HEVC method for reducing complexity.
Background
3D high efficiency video coding (3D-HEVC) is the latest extension of the HEVC standard, developed to improve the compression performance of the multi-view video plus depth (MVD) format. It adds several coding tools to better represent the associated texture and depth data.
To compress MVD data efficiently, further tools are designed to exploit dependencies between components. However, these extra coding tools bring significant computational complexity to the encoder and hinder practical applications, so the complexity of 3D-HEVC has become a crucial issue. The computational complexity therefore needs to be reduced further without sacrificing coding performance, so that 3D-HEVC can be applied more widely.
The fast intra-mode decision and CU depth-level prediction methods of the HEVC encoder are designed only for texture video and exploit neither the properties of the depth map nor the new intra prediction modes, so the mode decision process in 3D-HEVC still needs to be improved.
Several state-of-the-art studies have been proposed for fast 3D-HEVC coding. They fall into three main categories: reducing the complexity of depth coding, reducing the complexity of texture video coding, and reducing the complexity of both texture and depth coding. In the first category, C. Park proposes a fast decision method that assigns DMM complexity to different strategies based on edge classification and can adaptively skip unused DMM modes. Zhang et al. propose a fast intra coding method that terminates the quadtree partitioning of depth maps early and adaptively detects corner points and reassigns partition levels. Lei et al. propose a fast mode decision that reduces the candidate modes in depth coding, jointly exploiting inter-view and gray-level similarity correlations to search for the best PU mode. In the second category, Q. Zhang et al. propose a fast texture video mode decision that reduces 3D-HEVC computational complexity based on depth information, using depth-map value correlation to simplify the coding of texture video. H. R. Tohidypour et al. propose an online-learning-based method to speed up texture coding that also adjusts the prediction-mode search in texture video coding. In the third category, L. Shen et al. propose a mode decision method that adaptively adjusts the mode decision process by using the correlation of neighboring CU depth levels and the texture-depth correlation in order to speed up the most time-consuming prediction process. Shen et al. also propose an efficient CU processing method to save coding time for real-time applications.
The above methods effectively reduce 3D-HEVC coding time while maintaining almost the same video quality as the original encoder. However, they do not fully exploit the important coding information shared among the inter-view, spatio-temporal and texture-depth correlations, and encoder complexity still needs to be reduced further.
Disclosure of Invention
To address the defects in the background art, the invention provides a fast and efficient 3D-HEVC (high efficiency video coding) method for reducing complexity, solving the technical problem of the high complexity of existing 3D-HEVC coding methods.
The technical scheme of the invention is realized as follows:
a fast and efficient 3D-HEVC method for complexity reduction, comprising the steps of:
s1, starting the decision of the treeblock, and deriving the prediction variables on the corresponding treeblocks of space-time, previous coding views and texture depth;
s2, identifying the motion complexity of the adjacent tree block and the current tree block according to the predictive variables in the step S1, and dividing the current tree block into static and complex motion tree blocks;
s3, skipping the CU depth level of the texture treeblock and the depth treeblock according to the motion characteristics of the static or complex motion treeblock;
s4, calculating the rate distortion cost of the current tree block and the rate distortion cost value of the adjacent tree block according to the prediction variables in the step S1, and determining the minimum value of the rate distortion cost values of the adjacent tree blocks as a threshold THr of mode decision;
s5, determining whether the rate-distortion cost value of the current tree block in step S4 is less than the threshold THr, if so, skipping unnecessary mode decision, terminating the mode prediction in advance, and determining the best mode from the full mode.
The coding predictor corresponding to the predictor variable in step S1 is:
ψ = {T_S, T_T, T_I, T_TD},
wherein T_S is the spatial predictor, T_T is the temporal predictor co-located with the current texture treeblock T_C, T_I is the inter-view predictor in the neighboring coded view, and T_TD is the texture-depth predictor in the corresponding depth-map view.
Based on the motion complexity of the neighboring treeblocks and the current treeblock, the motion vector of the covering block of the current texture treeblock and of each corresponding treeblock is defined as MV_ij = (MVx_ij, MVy_ij), and the horizontal and vertical motion complexities MCx and MCy are then defined over the motion predictor set ψ, where T is the total number of the current treeblock and its neighboring treeblocks and ρ_ij is a weight factor;
the motion complexity parameters are: MC ═ MCx + MCy;
from the motion complexity parameter MC, the current treeblock TcIt is divided into two types, static and complex motion treeblock:
where R is a threshold factor that determines treeblocks with static or complex motion.
The threshold THr of the mode decision in step S4 is:
THr = μ · min{RDcost_P1, RDcost_P2, RDcost_P3, RDcost_P4, RDcost_Tpredict},
wherein μ is an adjustment parameter, P1, P2, P3 and P4 are spatial predictors, RDcost_P1, RDcost_P2, RDcost_P3 and RDcost_P4 are the rate-distortion cost values of the neighboring treeblocks of the current treeblock, and RDcost_Tpredict is the rate-distortion cost value predicted from the neighboring predictors of the current treeblock, defined over the predictor set ψ as a weighted combination of the neighboring cost values, where i indexes the neighboring treeblocks, RDcost_i is the rate-distortion cost value of the i-th neighboring treeblock, α_i is the treeblock weight parameter, ξ_i is an adjustment factor, and χ, δ, ε and γ are mode weight factors.
The accuracy of the early termination in step S5 is:
o = N_E / N_F,
wherein N_E is the number of early-terminated CU depth levels and N_F is the total number of CU depth levels.
Beneficial effects of this technical solution: the method extracts mode-prediction features for the 3D-HEVC encoder by jointly exploiting the important coding information among the inter-view, spatio-temporal and texture-depth correlations, then optimizes the mode prediction of the current texture and depth treeblocks, and effectively saves HTM encoding time while preserving rate-distortion performance.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flow chart of the present invention.
FIG. 2 shows the predictors of the CU depth level of the current treeblock according to the present invention.
FIG. 3 shows the relationship curve between the adjustment parameter μ and the early-termination accuracy in depth-map coding according to the present invention.
FIG. 4 shows the comparison of the overall encoding-time savings of the present invention and of the FCUDR, ESMD and AETMP methods on the "Kendo" sequence.
FIG. 5 shows the comparison of the overall encoding-time savings of the present invention and of the FCUDR, ESMD and AETMP methods on the "Poznan_Street" sequence.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art without inventive effort on the basis of the embodiments of the present invention fall within the scope of the present invention.
As shown in FIG. 1, the embodiment of the present invention provides a fast and efficient 3D-HEVC method for reducing complexity. It first jointly uses inter-view, spatio-temporal and texture-depth coding information to extract mode-prediction features for the 3D-HEVC encoder, and then optimizes the mode prediction of the current texture and depth treeblocks through two methods: a fast CU depth-level range determination method and an adaptive early-termination mode prediction method. The specific steps are as follows:
s1, starting tree block decision, and deriving prediction variables on corresponding tree blocks of space-time, previous coding views and texture depths; since 3D-HEVC introduces additional coding tools, especially for depth maps, it has the same quadtree structure as HEVC. The mode prediction process of 3D-HEVC is performed using all coding modes to select the best coding mode with the smallest rate-distortion (RD) cost. The calculation method of the rate distortion cost value comprises the following steps:
J_mode = (SSE_luma + ω_chroma · SSE_chroma) + λ_mode · R_mode   (1),
wherein SSE_luma is the distortion between the current treeblock and its reconstruction on the luminance component, SSE_chroma is the distortion between the current treeblock and its reconstruction on the chrominance components, ω_chroma is a weighting parameter, λ_mode is the Lagrangian multiplier, and R_mode is the total bit-rate cost. This "try all and select the best" strategy improves the coding efficiency of the encoder but requires significant complexity. Compared with HEVC, all of the new modes are added to the full RD search list in the 3D-HEVC encoder, which results in high complexity and hinders the application of 3D-HEVC.
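As a minimal illustration of formula (1), the helper below evaluates J_mode from quantities assumed to be already computed by the encoder (the SSE terms, the chroma weight, the Lagrangian multiplier and the bit-rate cost); it is a sketch, not part of the HTM software.

```python
def rd_cost_jmode(sse_luma: float, sse_chroma: float,
                  w_chroma: float, lam_mode: float, r_mode: float) -> float:
    """Formula (1): J_mode = (SSE_luma + w_chroma * SSE_chroma) + lambda_mode * R_mode."""
    return (sse_luma + w_chroma * sse_chroma) + lam_mode * r_mode

# Example: the encoder keeps the candidate mode with the smallest J_mode.
# best = min(candidates, key=lambda c: rd_cost_jmode(*c))
```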
Since the coding information of the current treeblock is similar to that of its temporal and spatial neighbors, this property is used to analyze the characteristics of the current treeblock and omit some unnecessary prediction modes. Furthermore, exploiting inter-view correlation can reduce MVD data redundancy in 3D-HEVC encoders: the views of multi-view content are similar, which makes the coding parameters of the different views correlated.
The coding information of the current treeblock is correlated with that of its corresponding treeblock in the adjacent independent view, so the CU depth level of the current treeblock can be determined based on that corresponding treeblock. Since the texture and depth images of an MVD sequence represent the same scene, their coding features tend to be similar in the mode decision process: the coding of the current treeblock of the depth map is correlated with that of its co-located treeblock in the texture video, and the best CU depth level of the texture-video treeblock is likely to be the best CU depth level of the depth-map treeblock. The invention therefore exploits the similarity of coding information between depth and texture to save 3D-HEVC coding time.
Using the coding dependencies from spatio-temporal, previously coded views and texture depths, the coding predictor ψ is defined as:
ψ = {T_S, T_T, T_I, T_TD}   (2),
wherein T_S is the spatial predictor, T_T is the temporal predictor co-located with the current texture treeblock T_C, T_I is the inter-view predictor in the neighboring coded view, and T_TD is the texture-depth predictor in the corresponding depth-map view; the corresponding treeblock of the texture map (or depth map) is the treeblock to which the current treeblock corresponds in the depth map (or texture map). The coded views are stored in memory, and previously coded views are referenced during encoding. The mode-prediction features of the 3D-HEVC encoder are extracted from this inter-view, spatio-temporal and texture-depth coding information, i.e., from the coding predictor, and the subsequent optimization, namely the fast CU depth-level range determination method and the adaptive early-termination mode prediction method, is carried out on the basis of this predictor. The treeblock decision here comprises the fast depth-level range determination and the adaptive early-termination mode decision. Based on the coding information in the predictor, unnecessary variable-size prediction modes can be skipped for the current treeblock.
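The predictor set ψ of formula (2) can be pictured as a small container of references to the four co-located treeblocks. The following data-structure sketch uses hypothetical type and field names; it is not taken from the HTM software.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Treeblock:
    """Minimal stand-in for a coded treeblock and the statistics reused here."""
    depth_level: int                 # best CU depth level chosen for this treeblock
    rd_cost: float                   # RD cost of its best mode (formula (1))
    mv: Tuple[float, float]          # (MVx, MVy) of the covering block

@dataclass
class CodingPredictor:
    """psi = {T_S, T_T, T_I, T_TD} from formula (2)."""
    spatial: Optional[Treeblock]        # T_S: spatial neighbor
    temporal: Optional[Treeblock]       # T_T: co-located treeblock in the reference picture
    inter_view: Optional[Treeblock]     # T_I: corresponding treeblock in the coded neighboring view
    texture_depth: Optional[Treeblock]  # T_TD: co-located treeblock in the texture/depth counterpart

    def available(self) -> List[Treeblock]:
        """Predictors that actually exist for the current treeblock (border or first-view cases drop some)."""
        return [t for t in (self.spatial, self.temporal,
                            self.inter_view, self.texture_depth) if t is not None]
```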
S2, identifying the motion complexity of the neighboring treeblocks and the current treeblock according to the prediction variables in step S1, and classifying the current treeblock as a static or complex-motion treeblock.
In 3D-HEVC, the CU depth levels of the quadtree structure are also used to compress texture and depth data, and the CU depth-level range is fixed for all texture and depth coding. By exploring the motion-estimation (ME) characteristics of the current treeblock based on the motion information of the motion predictors, a new criterion is introduced to identify the motion complexity of the current treeblock relative to its neighboring treeblocks. The motion vector of the covering block of the current texture treeblock and of each corresponding treeblock is defined as MV_ij = (MVx_ij, MVy_ij), and the horizontal and vertical motion complexities MCx and MCy are then defined over the motion predictor set ψ, where T is the total number of the current treeblock and its neighboring treeblocks and ρ_ij is a weight factor that depends on the relevance of the current treeblock and its neighboring treeblocks to the current CU. The motion complexity parameter is MC = MCx + MCy.
According to the motion complexity parameter MC, the current treeblock T_c is classified into one of two types, static or complex-motion, by comparing MC with a threshold factor R that distinguishes treeblocks with static motion from those with complex motion.
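The exact expressions for MCx and MCy appear only in the original drawings, so the sketch below assumes a plausible weighted-average form consistent with the variable definitions above (weights ρ_ij over the T treeblocks); both the formula shape and the comparison against R are assumptions rather than the authoritative definitions.

```python
from typing import Sequence, Tuple

def motion_complexity(mvs: Sequence[Tuple[float, float]],
                      weights: Sequence[float]) -> float:
    """MC = MCx + MCy, with MCx/MCy taken here as weighted averages of the
    absolute horizontal/vertical motion-vector components of the current
    treeblock and its T predictor treeblocks (assumed form)."""
    t = len(mvs)
    mcx = sum(w * abs(mvx) for (mvx, _), w in zip(mvs, weights)) / t
    mcy = sum(w * abs(mvy) for (_, mvy), w in zip(mvs, weights)) / t
    return mcx + mcy

def classify_treeblock(mc: float, r: float) -> str:
    """Step S2 classification: static when MC stays below the threshold factor R (assumed direction)."""
    return "static" if mc <= r else "complex_motion"
```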
S3, adaptively skipping CU depth levels of the texture treeblock and the depth treeblock according to the motion characteristics of the static or complex-motion treeblock. Adaptation here means automatically adjusting the processing method, processing order, processing parameters, boundary conditions or constraints according to the characteristics of the data being processed, so as to match the statistical distribution and structural characteristics of that data and obtain the best processing result. The threshold factor is crucial for mode decision in 3D-HEVC, since it balances complexity reduction against coding quality. When the current treeblock lies in a complex-motion region, the optimal depth-level range is set separately for texture coding and depth coding. Simulation results show that the optimal values depend on the content of each sequence. In practice, small depth levels occur very frequently for treeblocks in static areas, whereas small depth levels are rarely selected for treeblocks in complex-motion regions; the CU depth levels are therefore skipped adaptively by exploiting the motion characteristics of the texture and depth treeblocks, as sketched below.
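The text states only that static treeblocks strongly favour small depth levels while complex-motion treeblocks rarely use them, so the concrete ranges in the following sketch are illustrative assumptions, not the values used in the invention.

```python
def candidate_depth_levels(motion_class: str, max_depth: int = 3) -> range:
    """Restrict the CU depth-level range (0 = 64x64 CU ... 3 = 8x8 CU) by motion class.
    Illustrative ranges only: static treeblocks skip the deepest level,
    complex-motion treeblocks skip depth level 0."""
    if motion_class == "static":
        return range(0, max_depth)          # depth levels 0..2
    return range(1, max_depth + 1)          # depth levels 1..3
```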
S4, calculating the rate-distortion cost of the current treeblock and the rate-distortion cost values of the neighboring treeblocks according to the prediction variables in step S1, and deriving the mode-decision threshold THr from the minimum of the neighboring rate-distortion cost values. In most cases the inter-view, spatio-temporal and texture-depth coding information is closely related to the current treeblock in the 3D-HEVC mode decision process, since treeblocks contain a large number of background or homogeneous regions. Because of this correlation, the RD cost of the current treeblock is closely related to that of its neighboring treeblocks. In 3D-HEVC the mode prediction process finds the best mode among all candidate modes by RD optimization; an early-termination strategy is therefore proposed in which unnecessary mode decisions are skipped when the minimum RD cost of the current treeblock (texture video or depth map) is less than the early-termination threshold.
The threshold THr of the mode decision in step S4 is:
THr = μ · min{RDcost_P1, RDcost_P2, RDcost_P3, RDcost_P4, RDcost_Tpredict}   (6),
wherein μ is an adjustment parameter, P1, P2, P3 and P4 are spatial predictors, RDcost_P1, RDcost_P2, RDcost_P3 and RDcost_P4 are the rate-distortion cost values of the neighboring treeblocks of the current treeblock, computed with formula (1), and RDcost_Tpredict is the rate-distortion cost value predicted from the neighboring predictors of the current treeblock, defined in formula (7) over the predictor set ψ as a weighted combination of the neighboring cost values,
where i indexes the neighboring treeblocks, RDcost_i is the RD cost value of the i-th neighboring treeblock (RDcost_P1, RDcost_P2, RDcost_P3 and RDcost_P4 are among the RDcost_i), α_i is the treeblock weight parameter, ξ_i is an adjustment factor, and χ, δ, ε and γ are mode weight factors. The adjustment factor ξ_i is set to 1 when the corresponding neighboring treeblock is available and to 0 otherwise. The mode weight factors χ, δ, ε and γ are set to 0.4, 0.2 and 0.2, respectively. A proper choice of the adjustment parameter μ saves 3D-HEVC coding time while maintaining high accuracy. By analyzing the cases μ = 1.30, μ > 1.30 and μ < 1.30, the relationship between the value of the adjustment parameter μ and the early-termination accuracy of the texture-map and depth-map treeblocks is derived; FIG. 3 shows the relationship between the value of μ and the early-termination accuracy of the depth-map treeblocks. From this analysis of the early-termination accuracy for each value of μ, the final conclusion is that the parameter is set to μ = 1.30 for texture-video treeblocks and to μ = 1.05 for depth-map coding.
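The threshold of formula (6) and the predictor cost of formula (7) can be sketched as follows. Step S4 states that the minimum of the neighboring cost values is used; the plain weighted-sum form of RDcost_Tpredict is an assumption read off the variable definitions, and neither function is part of the HTM software.

```python
from typing import Sequence

def predictor_rd_cost(costs: Sequence[float],
                      alphas: Sequence[float],
                      xis: Sequence[int]) -> float:
    """RDcost_Tpredict (formula (7), assumed form): weighted sum over the
    predictor treeblocks, with alpha_i the mode weight factors and
    xi_i = 1 when the corresponding neighbor is available, else 0."""
    return sum(a * xi * c for c, a, xi in zip(costs, alphas, xis))

def mode_decision_threshold(neighbor_costs: Sequence[float],
                            predictor_cost: float,
                            mu: float) -> float:
    """Formula (6): THr = mu * min{RDcost_P1..P4, RDcost_Tpredict}."""
    return mu * min([*neighbor_costs, predictor_cost])

# The text reports mu = 1.30 for texture-video treeblocks and mu = 1.05 for
# depth-map coding; early termination then fires when the current RD cost < THr.
```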
S5, determining whether the rate-distortion cost value of the current treeblock in step S4 is less than the threshold THr; if so, unnecessary mode decisions are skipped and the mode prediction is terminated early, and otherwise the best mode is determined from the full mode search.
The accuracy of the early termination in step S5 is:
o = N_E / N_F   (8),
wherein N_E is the number of early-terminated CU depth levels and N_F is the total number of CU depth levels. Based on the total number of CU depth levels N_F, the optimal coding mode is finally obtained and the complexity is reduced.
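For completeness, the accuracy metric of formula (8) is a simple ratio; the helper below is illustrative only.

```python
def early_termination_accuracy(n_early: int, n_full: int) -> float:
    """Formula (8): o = N_E / N_F, the share of CU depth levels that were
    terminated early out of all CU depth levels processed."""
    return n_early / n_full if n_full else 0.0
```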
To evaluate the performance of the proposed method, the proposed fast scheme was implemented in the 3D-HEVC reference encoder (HTM 16.1). Eight MVD sequences of two resolutions (1024 × 768 and 1920 × 1088) recommended by JCT-3V were used under the Common Test Conditions (CTC). Each test MVD sequence contains three views. Among these sequences, "Shark", "Undo_Dancer" and "GT_Fly" are synthetic video sequences with high-precision depth maps, while "Kendo", "Balloons", "Newspaper", "Poznan_Hall2" and "Poznan_Street" are natural video sequences with estimated depth maps. The coding results of the invention compared with the original encoder are shown in Table 1 and FIGS. 4 and 5, covering the fast CU depth-level range determination (FCUDR) and the adaptive early-termination mode prediction (AETMP). The FCUDR strategy skips unnecessary depth levels, and the AETMP strategy loses hardly any RD efficiency, indicating that AETMP can effectively terminate ME and DE early for unnecessary CU sizes. The verification results show that the proposed fast scheme saves 70.2% of the running time of the 3D-HEVC encoder with negligible RD performance loss.
Table 1 encoding results of the present invention and the original encoder
Test sequence | Code rate increase (%) | Video quality (dB) | Time saving (%) |
---|---|---|---|
Kendo | 0.63 | -0.04 | -74.9 |
Balloons | 0.51 | -0.04 | -70.8 |
Newspaper | 0.96 | -0.05 | -63.7 |
Shark | 1.12 | -0.06 | -62.4 |
Undo_Dancer | 1.37 | -0.06 | -62.2 |
GT_Fly | 0.48 | -0.03 | -71.1 |
Poznan_Street | 0.42 | -0.03 | -77.6 |
Poznan_Hall2 | 0.21 | -0.02 | -79.2 |
Average | 0.62 | -0.03 | -70.2 |
In summary, the method extracts mode-prediction features for the 3D-HEVC encoder by exploiting inter-view, spatio-temporal and texture-depth coding information, and then optimizes the mode prediction of the current texture and depth treeblocks, effectively saving HTM encoding time with negligible RD performance loss.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (5)
1. A fast and efficient 3D-HEVC method for complexity reduction, characterized by the steps of:
s1, starting the decision of the treeblock, and deriving the prediction variables on the corresponding treeblocks of space-time, previous coding views and texture depth;
s2, identifying the motion complexity of the adjacent tree block and the current tree block according to the predictive variables in the step S1, and dividing the current tree block into static and complex motion tree blocks;
s3, skipping the CU depth level of the texture treeblock and the depth treeblock according to the motion characteristics of the static or complex motion treeblock;
s4, calculating the rate distortion cost of the current tree block and the rate distortion cost value of the adjacent tree block according to the prediction variables in the step S1, and determining the minimum value of the rate distortion cost values of the adjacent tree blocks as a threshold THr of mode decision;
s5, determining whether the rate-distortion cost value of the current tree block in step S4 is less than the threshold THr, if so, skipping unnecessary mode decision, terminating the mode prediction in advance, and determining the best mode from the full mode.
2. The fast and efficient 3D-HEVC method for complexity reduction according to claim 1, wherein the coding predictor set corresponding to the prediction variables in step S1 is:
ψ = {T_S, T_T, T_I, T_TD},
wherein T_S is the spatial predictor, T_T is the temporal predictor co-located with the current texture treeblock T_C, T_I is the inter-view predictor in the neighboring coded view, and T_TD is the texture-depth predictor in the corresponding depth-map view.
3. The fast and efficient 3D-HEVC method for complexity reduction according to claim 1, characterized in that, based on the motion complexity of the neighboring treeblocks and the current treeblock, the motion vector of the covering block of the current texture treeblock and of each corresponding treeblock is defined as MV_ij = (MVx_ij, MVy_ij), and the horizontal and vertical motion complexities MCx and MCy are then defined over the motion predictor set ψ, where T is the total number of the current treeblock and its neighboring treeblocks and ρ_ij is a weight factor;
the motion complexity parameter is MC = MCx + MCy;
according to the motion complexity parameter MC, the current treeblock T_c is classified into one of two types, static or complex-motion, by comparing MC with a threshold factor R that distinguishes treeblocks with static motion from those with complex motion.
4. Fast and efficient 3D-HEVC method for complexity reduction according to claim 1 characterized in that the threshold THr of the mode decision in step S4 is:
THr = μ · min{RDcost_P1, RDcost_P2, RDcost_P3, RDcost_P4, RDcost_Tpredict},
wherein μ is an adjustment parameter, P1, P2, P3 and P4 are spatial predictors, RDcost_P1, RDcost_P2, RDcost_P3 and RDcost_P4 are the rate-distortion cost values of the neighboring treeblocks of the current treeblock, and RDcost_Tpredict is the rate-distortion cost value predicted from the neighboring predictors of the current treeblock, defined over the predictor set ψ as a weighted combination of the neighboring cost values, where i indexes the neighboring treeblocks, RDcost_i is the rate-distortion cost value of the i-th neighboring treeblock, α_i is the treeblock weight parameter, ξ_i is an adjustment factor, and χ, δ, ε and γ are mode weight factors.
5. Fast and efficient 3D-HEVC method for complexity reduction according to claim 1 characterized in that the accuracy of the early termination in step S5 is:
o = N_E / N_F,
wherein N_E is the number of early-terminated CU depth levels and N_F is the total number of CU depth levels.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911149001.1A CN111083495A (en) | 2019-11-21 | 2019-11-21 | Rapid and efficient 3D-HEVC (high efficiency video coding) method for reducing complexity |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911149001.1A CN111083495A (en) | 2019-11-21 | 2019-11-21 | Rapid and efficient 3D-HEVC (high efficiency video coding) method for reducing complexity |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111083495A (en) | 2020-04-28
Family
ID=70311459
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911149001.1A Pending CN111083495A (en) | 2019-11-21 | 2019-11-21 | Rapid and efficient 3D-HEVC (high efficiency video coding) method for reducing complexity |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111083495A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103338370A (en) * | 2013-06-05 | 2013-10-02 | 宁波大学 | Multi-view depth video fast coding method |
CN110381325A (en) * | 2019-08-20 | 2019-10-25 | 郑州轻工业学院 | The fast mode decision method of low complex degree depth coding based on 3D-HEVC |
CN110446052A (en) * | 2019-09-03 | 2019-11-12 | 南华大学 | The quick CU depth selection method of depth map in a kind of 3D-HEVC frame |
-
2019
- 2019-11-21 CN CN201911149001.1A patent/CN111083495A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103338370A (en) * | 2013-06-05 | 2013-10-02 | 宁波大学 | Multi-view depth video fast coding method |
CN110381325A (en) * | 2019-08-20 | 2019-10-25 | 郑州轻工业学院 | The fast mode decision method of low complex degree depth coding based on 3D-HEVC |
CN110446052A (en) * | 2019-09-03 | 2019-11-12 | 南华大学 | The quick CU depth selection method of depth map in a kind of 3D-HEVC frame |
Non-Patent Citations (5)
Title |
---|
Liquan Shen, et al.: "An efficient CU size decision method for HEVC encoders", IEEE Transactions on Multimedia
Qian Zhang, et al.: "Fast mode decision based on gradient information in 3D-HEVC", IEEE Access
Qiuwen Zhang, et al.: "Adaptive early termination mode decision for 3D-HEVC using inter-view and spatio-temporal correlations", International Journal of Electronics and Communications
Qiuwen Zhang, et al.: "Efficient multiview video plus depth coding for 3D-HEVC based on complexity classification of the treeblock", Journal of Real-Time Image Processing
Qiuwen Zhang, et al.: "Fast depth map mode decision based on depth-texture correlation and edge classification for 3D-HEVC", Journal of Visual Communication and Image Representation
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5509390B2 (en) | Method and system for illumination compensation and transition for video encoding and processing | |
EP3389276B1 (en) | Hash-based encoder decisions for video coding | |
CN102917225B (en) | HEVC intraframe coding unit fast selecting method | |
CN109302610B (en) | Fast coding method for screen content coding interframe based on rate distortion cost | |
CN103517069A (en) | HEVC intra-frame prediction quick mode selection method based on texture analysis | |
CN112291562B (en) | Fast CU partition and intra mode decision method for H.266/VVC | |
US20060039476A1 (en) | Methods for efficient implementation of skip/direct modes in digital video compression algorithms | |
KR20140068013A (en) | Depth map encoding and decoding | |
US9667989B2 (en) | Moving picture coding device, moving picture coding method, and moving picture coding program, and moving picture decoding device, moving picture decoding method, and moving picture decoding program | |
US20240031576A1 (en) | Method and apparatus for video predictive coding | |
CN105208387A (en) | HEVC intra-frame prediction mode fast selection method | |
CN111492655A (en) | Texture-based partition decision for video compression | |
CN114286093A (en) | Rapid video coding method based on deep neural network | |
Kuang et al. | Fast mode decision algorithm for HEVC screen content intra coding | |
Li et al. | Self-learning residual model for fast intra CU size decision in 3D-HEVC | |
CN114222133B (en) | Content self-adaptive VVC intra-frame coding rapid dividing method based on classification | |
CN106878754B (en) | A kind of 3D video depth image method for choosing frame inner forecast mode | |
Xue et al. | Fast ROI-based HEVC coding for surveillance videos | |
Racapé et al. | Spatiotemporal texture synthesis and region-based motion compensation for video compression | |
CN111083495A (en) | Rapid and efficient 3D-HEVC (high efficiency video coding) method for reducing complexity | |
CN111246218B (en) | CU segmentation prediction and mode decision texture coding method based on JND model | |
CN116260968A (en) | Encoding method, apparatus, device and storage medium for reducing video encoding complexity | |
CN114827606A (en) | Quick decision-making method for coding unit division | |
CN111031303B (en) | 3D-HEVC (high efficiency video coding) rapid depth coding method based on Bayesian decision theorem | |
CN113079374B (en) | Image encoding method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200428 |