US20130128982A1 - Method for generating prediction block in AMVP mode
Info
- Publication number
- US20130128982A1 (application US13/742,058)
- Authority
- US
- United States
- Prior art keywords
- current
- amvp
- block
- motion vector
- candidate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H04N19/00587
- H04N19/51—Motion estimation or motion compensation (predictive coding involving temporal prediction)
- H04N19/52—Processing of motion vectors by predictive encoding
- H04N19/103—Selection of coding mode or of prediction mode (adaptive coding)
- H04N19/124—Quantisation
- H04N19/126—Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, e.g. a block or a macroblock
- H04N19/198—Adaptive coding specially adapted for the computation of encoding parameters, including smoothing of a sequence of encoding parameters, e.g. by averaging, by choice of the maximum, minimum or median value
- H04N19/593—Predictive coding involving spatial prediction techniques
- H04N19/61—Transform coding in combination with predictive coding
Definitions
- the present invention relates to a method for generating a prediction block of an image that has been encoded in Advanced Motion Vector Prediction (AMVP) mode, and more particularly, to a method for decoding motion information encoded in AMVP mode and generating a prediction block based on the motion information.
- inter-prediction coding is one of the most effective video compression techniques, in which a block similar to a current block is extracted from a previous picture and the difference between the current block and the extracted block is encoded.
- motion information about each block should be additionally transmitted in the inter-prediction coding scheme, with a coded residual block. Therefore, effective coding of motion information that reduces the amount of data is another video compression technique.
- a block best matching the current block is searched for within a predetermined search range of a reference picture using a predetermined evaluation function. Once the best matching block is found in the reference picture, only the residue between the current block and the best matching block is transmitted, thereby increasing the data compression rate.
- the motion vector information is encoded and inserted into a bit stream during coding. If the motion vector information is simply encoded and inserted, overhead is increased, thereby decreasing the compression rate of video data.
- the motion vector of the current block is predicted using neighboring blocks and only the difference between a motion vector predictor resulting from the prediction and the original motion vector is encoded and transmitted, thereby compressing the motion vector information in the inter-prediction coding scheme.
- the motion vector predictor of the current block is determined to be the median of the motion vectors (mvA, mvB, mvC) of neighboring blocks. Since the neighboring blocks are likely to be similar to one another, the median of their motion vectors is used as the motion vector predictor of the current block.
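- As a rough illustration of this median-based prediction, the sketch below computes a component-wise median of the three neighboring motion vectors; the names mvA, mvB and mvC follow the notation above, and treating the median component-wise is an assumption in the style of H.264 motion vector prediction:

```python
def median3(a: int, b: int, c: int) -> int:
    """Median of three scalar values."""
    return max(min(a, b), min(max(a, b), c))

def median_mv_predictor(mvA, mvB, mvC):
    """Component-wise median of the motion vectors of the left (A),
    upper (B), and top-right (C) neighboring blocks."""
    return (median3(mvA[0], mvB[0], mvC[0]),
            median3(mvA[1], mvB[1], mvC[1]))

# Example: two neighbors move right, one moves left; the median
# suppresses the outlier.
print(median_mv_predictor((4, 0), (5, 1), (-2, 0)))  # -> (4, 0)
```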
- if the motion of the current block differs from the motions of the neighboring blocks, however, the median value does not predict the motion vector of the current block effectively.
- as prediction blocks become larger in size and more diversified, the number of reference pictures increases. Thus the data amount of a residual block is reduced, but the amount of motion information to be transmitted (a motion vector and a reference picture index) is increased.
- An object of the present invention devised to solve the problem lies in a method for generating a prediction block by effectively reconstructing motion information encoded in AMVP mode.
- the object of the present invention can be achieved by providing a method for generating a prediction block in AMVP mode, including reconstructing a reference picture index and a differential motion vector of a current Prediction Unit (PU), searching for an effective spatial AMVP candidate for the current PU, searching for an effective temporal AMVP candidate for the current PU, generating an AMVP candidate list using the effective spatial and temporal AMVP candidates, adding a motion vector having a predetermined value as a candidate to the AMVP candidate list when the number of the effective AMVP candidates is smaller than a predetermined number, determining a motion vector corresponding to an AMVP index of the current PU from among the motion vectors included in the AMVP candidate list to be a motion vector predictor of the current PU, reconstructing a motion vector of the current PU using the differential motion vector and the motion vector predictor, and generating a prediction block corresponding to a position indicated by the reconstructed motion vector within a reference picture indicated by the reference picture index.
- a reference picture index and a differential motion vector of a current prediction unit are reconstructed and an AMVP candidate list is made using effective spatial and temporal AMVP candidates of the current prediction unit. If the number of the effective AMVP candidates is smaller than a predetermined number, a motion vector having a predetermined value is added to the AMVP candidate list. Then, a motion vector corresponding to an AMVP index of the current prediction unit is selected as a motion vector predictor of the current prediction unit from among motion vectors included in the AMVP candidate list. A motion vector of the current prediction unit is reconstructed using the differential motion vector and the motion vector predictor and a prediction block corresponding to a position indicated by the reconstructed motion vector in a reference picture indicated by the reference picture index is generated.
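- The summary above maps onto a short decoding routine. The following is a minimal sketch, assuming a two-entry candidate list and a zero-valued default candidate; the function name and data representation are illustrative, not the patent's normative procedure:

```python
def decode_amvp_motion_vector(candidates, amvp_index, mvd, min_candidates=2):
    """Reconstruct the motion vector of the current PU: pad the AMVP
    candidate list with a zero motion vector when there are too few
    effective candidates, pick the predictor by index, and add the
    differential motion vector (mvd)."""
    amvp_list = list(candidates)
    while len(amvp_list) < min_candidates:
        amvp_list.append((0, 0))       # candidate with a predetermined value
    mvp = amvp_list[amvp_index]        # motion vector predictor
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# One effective spatial candidate; the list is padded with (0, 0).
print(decode_amvp_motion_vector([(3, -1)], amvp_index=0, mvd=(1, 2)))  # -> (4, 1)
```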
- FIG. 1 is a block diagram of a video encoder according to an embodiment of the present invention.
- FIG. 2 is a flowchart illustrating an inter-prediction coding operation according to an embodiment of the present invention.
- FIG. 3 is a flowchart illustrating a merge coding operation according to an embodiment of the present invention.
- FIG. 4 illustrates the positions of merge candidates according to an embodiment of the present invention.
- FIG. 5 illustrates the positions of merge candidates according to another embodiment of the present invention.
- FIG. 6 is a flowchart illustrating an AMVP coding operation according to an embodiment of the present invention.
- FIG. 7 is a block diagram of a video decoder according to an embodiment of the present invention.
- FIG. 8 is a flowchart illustrating an inter-prediction decoding operation according to an embodiment of the present invention.
- FIG. 9 is a flowchart illustrating a merge-mode motion vector decoding operation according to an embodiment of the present invention.
- FIG. 10 is a flowchart illustrating a merge-mode motion vector decoding operation according to another embodiment of the present invention.
- FIG. 11 is a flowchart illustrating an AMVP-mode motion vector decoding operation according to an embodiment of the present invention.
- FIG. 12 is a flowchart illustrating an AMVP-mode motion vector decoding operation according to another embodiment of the present invention.
- FIG. 1 is a block diagram of a video encoder according to an embodiment of the present invention.
- a video encoder 100 includes a picture divider 110, a transformer 120, a quantizer 130, a scanner 131, an entropy encoder 140, an intra-predictor 150, an inter-predictor 160, an inverse quantizer 135, an inverse transformer 125, a post-processor 170, a picture storage 180, a subtractor 190, and an adder 195.
- the picture divider 110 partitions every Largest Coding Unit (LCU) of a picture into Coding Units (CUs) each having a predetermined size by analyzing an input video signal, determines a prediction mode, and determines a size of a Prediction Unit (PU) for each CU.
- the picture divider 110 provides a PU to be encoded to the intra-predictor 150 or the inter-predictor 160 according to a prediction mode (or prediction method).
- the transformer 120 transforms a residual block which indicates a residual signal between the original block of an input PU and a prediction block generated from the intra-predictor 150 or the inter-predictor 160 .
- the residual block may be composed on a CU basis or a PU basis.
- the residual block is divided into optimum transform units and then transformed.
- a transform matrix may be differently determined based on a prediction mode (i.e. inter-prediction mode or intra-prediction mode). Because an intra-prediction residual signal includes directionality corresponding to the intra-prediction mode, a transform matrix may be determined for the intra-prediction residual signal adaptively according to the intra-prediction mode.
- Transform units may be transformed by two (horizontal and vertical) one-dimensional transform matrices.
- a predetermined single transform matrix is determined for inter-prediction.
- for example, if the intra-prediction mode is horizontal, the residual block is likely to be directional horizontally, and thus a Discrete Cosine Transform (DCT)-based integer matrix and a Discrete Sine Transform (DST)-based or Karhunen-Loeve Transform (KLT)-based integer matrix are applied vertically and horizontally, respectively.
- a DCT-based integer matrix is applied vertically and horizontally.
- in DC mode, a DCT-based integer matrix is applied in both directions.
- a transform matrix may be determined adaptively according to the size of a transform unit.
- the quantizer 130 determines a quantization step size to quantize the coefficients of the residual block transformed using the transform matrix.
- the quantization step size is determined for each CU of a predetermined size or larger (hereinafter, referred to as a quantization unit).
- the predetermined size may be 8×8 or 16×16.
- the coefficients of the transformed block are quantized using the determined quantization step size and the quantization matrix determined according to the prediction mode.
- the quantizer 130 uses the quantization step size of a quantization unit adjacent to a current quantization unit as a quantization step size predictor of the current quantization unit.
- the quantizer 130 may generate the quantization step size predictor of the current quantization unit using one or two effective quantization step sizes resulting from a sequential search of the left, upper, and top-left quantization units adjacent to the current quantization unit. For example, the first one of the effective quantization step sizes detected by searching the left, upper, and top-left quantization units in this order may be determined to be the quantization step size predictor. In addition, the average of two effective quantization step sizes may be determined to be the quantization step size predictor. If only one quantization step size is effective, it may be determined to be the quantization step size predictor. Once the quantization step size predictor is determined, the difference between the quantization step size of the current CU and the quantization step size predictor is transmitted to the entropy encoder 140.
- some or all of the left, upper, and top-left CUs adjacent to the current CU may not exist. However, there may be a previous CU in the LCU according to a coding order. Therefore, the quantization step sizes of the adjacent quantization units of the current CU and the quantization step size of the quantization unit previously encoded in the coding order within the LCU may be candidates. In this case, 1) the left quantization unit of the current CU, 2) the upper quantization unit of the current CU, 3) the top-left quantization unit of the current CU, and 4) the previously encoded quantization unit may be prioritized in descending order. The order of priority levels may be changed and the top-left quantization unit may be omitted.
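- A small sketch of the predictor search just described, implementing the first-effective-step-size variant with the previously coded quantization unit in the LCU as the last fallback (None marks a quantization unit that does not exist or is not effective; averaging two effective values, mentioned above, is an alternative not shown):

```python
def qstep_predictor(left, upper, top_left, previous):
    """Scan the left, upper, and top-left quantization units in order
    and return the first effective quantization step size, falling
    back to the previously encoded quantization unit in the LCU."""
    for qstep in (left, upper, top_left, previous):
        if qstep is not None:
            return qstep
    return None  # no predictor available

# Left unit missing; the upper unit supplies the predictor.
print(qstep_predictor(None, 28, 30, 26))  # -> 28
```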
- the quantized transformed block is provided to the inverse quantizer 135 and the scanner 131 .
- the scanner 131 converts the coefficients of the quantized transformed block to one-dimensional quantization coefficients by scanning the coefficients of the quantized transformed block. Since the coefficient distribution of the transformed block may be dependent on the intra-prediction mode after quantization, a scanning scheme is determined according to the intra-prediction mode. In addition, the coefficient scanning scheme may vary with the size of a transform unit. A scan pattern may be different according to a directional intra-prediction mode. The quantized coefficients are scanned in a reverse order.
- the same scan pattern applies to the quantization coefficients of each subset.
- a zigzag or diagonal scan pattern applies between subsets.
- while scanning from a main subset including a DC coefficient to the remaining subsets in a forward direction is preferable, scanning in a reverse direction is also possible.
- the inter-subset scan pattern may be set to be identical to the intra-subset scan pattern. In this case, the inter-subset scan pattern is determined according to an intra-prediction mode.
- the video encoder transmits information indicating the position of a last non-zero quantized coefficient in the transform unit to a video decoder.
- the video encoder may also transmit information indicating the position of a last non-zero quantized coefficient in each subset to the decoder.
- the inverse quantizer 135 dequantizes the quantized coefficients.
- the inverse transformer 125 reconstructs a spatial-domain residual block from the inverse-quantized transformed coefficients.
- the adder generates a reconstructed block by adding the residual block reconstructed by the inverse transformer 125 to a prediction block received from the intra-predictor 150 or the inter-predictor 160 .
- the post-processor 170 performs deblocking filtering to eliminate blocking artifact from a reconstructed picture, adaptive offset application to compensate for a difference from the original picture on a pixel basis, and adaptive loop filtering to compensate for a difference from the original picture on a CU basis.
- Deblocking filtering is preferably applied to the boundary between a PU and a transform unit which are of a predetermined size or larger.
- the size may be 8×8.
- the deblocking filtering process includes determining a boundary to be filtered, determining a boundary filtering strength to apply to the boundary, determining whether to apply a deblocking filter, and selecting a filter to apply to the boundary when determined to apply the deblocking filter.
- Adaptive offset application is intended to reduce the difference (i.e. distortion) between pixels in a deblocking-filtered picture and original pixels. It may be determined whether to perform the adaptive offset applying process on a picture basis or on a slice basis. A picture or slice may be divided into a plurality of offset areas and an offset type may be determined per offset area. There may be a predetermined number of (e.g. 4) edge offset types and two band offset types.
- in case of an edge offset type, the edge type of each pixel is determined and an offset corresponding to the edge type is applied to the pixel.
- the edge type is determined based on the distribution of two pixel values adjacent to a current pixel.
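- As a hedged sketch of such a classification, the code below derives an edge category from the signs of the differences against the two neighboring pixels; the exact categories are an assumption in the style of sample-adaptive edge offsets, not quoted from the text:

```python
def edge_type(left: int, cur: int, right: int) -> int:
    """Classify the current pixel from its two neighbors: the sum of
    the difference signs marks valleys (-2), slopes, and peaks (+2)."""
    def sign(x: int) -> int:
        return (x > 0) - (x < 0)
    return sign(cur - left) + sign(cur - right)  # value in -2..2

print(edge_type(10, 7, 12))   # -> -2 (local valley)
print(edge_type(10, 14, 12))  # -> 2 (local peak)
```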
- Adaptive loop filtering may be performed based on a comparison value between an original picture and a reconstructed picture that has been subjected to deblocking filtering or adaptive offset application. Adaptive loop filtering may apply across all pixels included in a 4×4 or 8×8 block. It may be determined for each CU whether to apply adaptive loop filtering. The size and coefficient of a loop filter may be different for each CU.
- Information indicating whether an adaptive loop filter is used for each CU may be included in each slice header. In case of a chrominance signal, the determination may be made on a picture basis. Unlike luminance, the loop filter may be rectangular.
- a determination as to whether to use adaptive loop filtering may be made on a slice basis. Therefore, information indicating whether adaptive loop filtering is used for a current slice is included in a slice header or a picture header. If the information indicates that adaptive loop filtering is used for the current slice, the slice header or picture header may further include information indicating the horizontal and/or vertical filter length of a luminance component used in the adaptive loop filtering.
- the slice header or picture header may include information indicating the number of filter sets. If the number of filter sets is 2 or larger, filter coefficients may be encoded in a prediction scheme. Accordingly, the slice header or picture header may include information indicating whether filter coefficients are encoded in a prediction scheme. If the prediction scheme is used, predicted filter coefficients are included in the slice header or picture header.
- chrominance components as well as luminance components may be filtered adaptively. Therefore, information indicating whether each chrominance component is filtered or not may be included in the slice header or picture header. In this case, information indicating whether the chrominance components Cr and Cb are filtered may be jointly encoded (i.e. multiplexed coding) to reduce the number of bits. In many cases, neither chrominance component is filtered, to reduce complexity. Thus, if both chrominance components Cr and Cb are not filtered, a lowest index is assigned and entropy-encoded. If both chrominance components Cr and Cb are filtered, a highest index is assigned and entropy-encoded.
- the picture storage 180 receives post-processed image data from the post-processor 170, and reconstructs and stores an image on a picture basis.
- a picture may be an image in a frame or field.
- the picture storage 180 includes a buffer (not shown) for storing a plurality of pictures.
- the inter-predictor 160 estimates a motion using at least one reference picture stored in the picture storage 180 and determines a reference picture index identifying the reference picture and a motion vector.
- the inter-predictor 160 extracts and outputs a prediction block corresponding to a PU to be encoded from the reference picture used for motion estimation among the plurality of reference pictures stored in the picture storage 180, according to the determined reference picture index and motion vector.
- the intra-predictor 150 performs intra-prediction coding using reconstructed pixel values of a picture including the current PU.
- the intra-predictor 150 receives the current PU to be prediction-encoded, selects one of a predetermined number of intra-prediction modes according to the size of the current block, and performs intra-prediction in the selected intra-prediction mode.
- the intra-predictor 150 adaptively filters reference pixels to generate an intra-prediction block. If the reference pixels are not available, the intra-predictor 150 may generate reference pixels using available reference pixels.
- the entropy encoder 140 entropy-encodes the quantized coefficients received from the quantizer 130 , intra-prediction information received from the intra-predictor 150 , and motion information received from the inter-predictor 160 .
- FIG. 2 is a flowchart illustrating an inter-prediction coding operation according to an embodiment of the present invention.
- the inter-prediction coding operation includes determining motion information of a current PU, generating a prediction block, generating a residual block, encoding the residual block, and encoding the motion information.
- in the following description, a PU and a block will be used interchangeably.
- the motion information of the current PU includes a reference picture index to be referred to for the current PU and a motion vector.
- one of one or more reconstructed reference pictures is determined to be a reference picture for the current PU and motion information indicating the position of the prediction block in the reference picture is determined.
- the reference picture index for the current block may be different according to the inter-prediction mode of the current block. For example, if the current block is in a single-directional prediction mode, the reference picture index indicates one of the reference pictures listed in List 0 (L0). On the other hand, if the current block is in a bi-directional prediction mode, the motion information may include reference picture indexes indicating one of the reference pictures listed in L0 and one of the reference pictures listed in List 1 (L1). In addition, if the current block is in a bi-directional prediction mode, the motion information may include a reference picture index indicating one or two of the reference pictures included in a List Combination (LC) being a combination of L0 and L1.
- the motion vector indicates the position of the prediction block in a picture indicated by the reference picture index.
- the motion vector may have an integer-pixel resolution or a 1/8 or 1/16 pixel resolution. If the motion vector does not have an integer-pixel resolution, the prediction block is generated by interpolating integer pixels.
- a prediction block of the current PU is generated by copying a corresponding block at the position indicated by the motion vector in the picture indicated by the reference picture index.
- the pixels of a prediction block are generated using integer pixels in the picture indicated by the reference picture index.
- prediction pixels may be generated using an 8-tap interpolation filter.
- prediction pixels may be generated using a 4-tap interpolation filter.
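- For illustration, the sketch below interpolates one horizontal half-pel luma sample with an 8-tap filter; the tap values are the HEVC half-sample luma coefficients and are an assumption here, since the text only specifies the filter lengths:

```python
# 8-tap half-pel luma filter (HEVC-style coefficients, summing to 64).
HALF_PEL_TAPS = (-1, 4, -11, 40, 40, -11, 4, -1)

def interp_half_pel(samples, i):
    """Interpolate the half-pel position between samples[i] and
    samples[i+1] from the 8 surrounding integer pixels."""
    window = samples[i - 3:i + 5]              # 8 integer pixels
    acc = sum(c * s for c, s in zip(HALF_PEL_TAPS, window))
    return (acc + 32) >> 6                     # round and divide by 64

row = [10, 10, 10, 10, 20, 20, 20, 20, 20, 20]
print(interp_half_pel(row, 3))  # half-pel sample on the step edge -> 15
```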
- a residual block is generated based on a difference between the current PU and the prediction block.
- the size of the residual block may be different from the size of the current PU. For example, if the current PU is of size 2N×2N, the current PU and the residual block are of the same size. However, if the current PU is of size 2N×N or N×2N, the residual block may be a 2N×2N block. That is, when the current PU is a 2N×N block, the residual block may be configured by combining two 2N×N residual blocks.
- in this case, a 2N×2N prediction block is generated by overlap-smoothing the boundary pixels, and then a residual block is generated using the difference between the 2N×2N original block (the two current blocks) and the 2N×2N prediction block.
- the residual block is encoded in units of a transform coding size. That is, the residual block is subjected to transform encoding, quantization, and entropy encoding in units of a transform coding size.
- the transform coding size may be determined in a quad-tree scheme according to the size of the residual block. Transform coding uses an integer-based DCT.
- the transform-encoded block is quantized using a quantization matrix.
- the quantized block is entropy-encoded by Context-Adaptive Binary Arithmetic Coding (CABAC) or Context-Adaptive Variable-Length Coding (CAVLC).
- the motion information of the current PU is encoded using motion information of PUs adjacent to the current PU.
- the motion information of the current PU is subjected to merge coding or AMVP coding. Therefore, it is determined whether to encode the motion information of the current PU by merge coding or AMVP coding, and the motion information of the current PU is encoded according to the determined coding scheme.
- spatial merge candidates and temporal merge candidates are derived (S210 and S220).
- the spatial merge candidates are first derived and then the temporal merge candidates are derived, by way of example.
- the present invention is not limited to the order of deriving the spatial and temporal merge candidates.
- the temporal merge candidates may be derived first and then the spatial merge candidates, or the spatial and temporal merge candidates may be derived in parallel.
- Spatial merge candidates may be configured in one of the following embodiments.
- Spatial merge candidate configuration information may be transmitted to the video decoder.
- the spatial merge candidate configuration information may indicate one of the following embodiments or information indicating the number of merge candidates in one of the following embodiments.
- Embodiment 1 (Spatial Merge Candidate Configuration 1)
- a plurality of spatial merge candidates may be a left PU (block A), an upper PU (block B), a top-right PU (block C), and a bottom-left PU (block D) adjacent to the current PU.
- all of the effective PUs may be candidates, or two effective PUs may be selected as candidates by scanning the blocks A to D in the order of A, B, C and D. If there are a plurality of PUs to the left of the current PU, an effective uppermost PU or a largest effective PU may be determined as the left PU adjacent to the current PU from among the plurality of left PUs.
- similarly, if there are a plurality of PUs above the current PU, an effective leftmost PU or a largest effective PU may be determined as the upper PU adjacent to the current PU from among the plurality of upper PUs.
- Embodiment 2 (Spatial Merge Candidate Configuration 2)
- a plurality of spatial merge candidates may be two effective PUs selected from among a left PU (block A), an upper PU (block B), a top-right PU (block C), a bottom-left PU (block D), and a top-left PU (block E) adjacent to the current PU by scanning the blocks A to E in the order of A, B, C, D and E.
- the left PU may be adjacent to the block E, not to the block D.
- the upper PU may be adjacent to the block E, not to the block C.
- Embodiment 3 (Spatial Merge Candidate Configuration 3)
- the left block (the block A), the upper block (the block B), the top-right block (the block C), the bottom-left block (the block D), and the top-left block (the block E) adjacent to the current PU may be candidates in this order, if they are effective.
- the block E is available if one or more of the blocks A to D are not effective.
- Embodiment 4 (Spatial Merge Candidate Configuration 4)
- a plurality of spatial merge candidates may include the left PU (the block A), the upper PU (the block B), and a corner PU (one of the blocks C, D and E) adjacent to the current PU.
- the corner PU is a first effective one of the top-right PU (the block C), bottom-left PU (the block D), and top-left PU (the block E) of the current PU by scanning them in the order of C, D and E.
- motion information of spatial merge candidates above the current PU may be set differently according to the position of the current PU. For example, if the current PU is at the upper boundary of an LCU, motion information of an upper PU (block B, C or E) adjacent to the current PU may be its own motion information or motion information of an adjacent PU.
- the motion information of the upper PU may be determined as one of its own motion information or motion information (a reference picture index and a motion vector) of an adjacent PU, according to the size and position of the current PU.
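- The scans described in the embodiments above reduce to a simple loop. The sketch below follows spatial merge candidate configuration 2 (keep the first two effective PUs among the blocks A, B, C, D and E); the data representation is illustrative:

```python
def spatial_merge_candidates(neighbors, max_candidates=2):
    """Scan the neighboring PUs in the order A, B, C, D, E and keep
    the first effective ones.  `neighbors` maps a position to its
    motion information, or None when the PU is not effective
    (e.g. intra-coded or outside the picture)."""
    order = ('A', 'B', 'C', 'D', 'E')  # left, upper, top-right, bottom-left, top-left
    found = []
    for pos in order:
        info = neighbors.get(pos)
        if info is not None:
            found.append((pos, info))
        if len(found) == max_candidates:
            break
    return found

# Block A is not effective, so B and C become the candidates.
neigh = {'A': None, 'B': ((2, 0), 0), 'C': ((2, 1), 0), 'D': None, 'E': ((0, 0), 1)}
print(spatial_merge_candidates(neigh))  # -> [('B', ((2, 0), 0)), ('C', ((2, 1), 0))]
```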
- a reference picture index and a motion vector of a temporal merge candidate are obtained in an additional process.
- the reference picture index of the temporal merge candidate may be obtained using the reference picture index of one of PUs spatially adjacent to the current PU.
- Reference picture indexes of temporal merge candidates for the current PU may be obtained using all or some of the reference picture indexes of the left PU (the block A), the upper PU (the block B), the top-right PU (the block C), the bottom-left PU (the block D), and the top-left PU (the block E) adjacent to the current PU.
- the reference picture indexes of the left PU (the block A), the upper PU (the block B), and a corner block (one of the blocks C, D and E) adjacent to the current PU may be used.
- the reference picture indexes of an odd number of (e.g. 3) effective PUs may be used from among the reference picture indexes of the left PU (the block A), upper PU (the block B), top-right PU (the block C), bottom-left PU (the block D), and top-left PU (the block E) adjacent to the current PU by scanning them in the order of A, B, C, D and E.
- the reference picture index of the left PU (hereinafter, referred to as the left reference picture index), the reference picture index of the upper PU (hereinafter, referred to as the upper reference picture index), and the reference picture index of the corner PU (hereinafter, referred to as the corner reference picture index), adjacent to the current PU, are obtained. While only one of the corner PUs C, D and E is taken as a candidate here, the present invention is not limited thereto; in an alternative embodiment, the PUs C and D may both be set as candidates (thus four candidates), or the PUs C, D and E may all be set as candidates (thus five candidates).
- reference picture index 0 may be set as the reference picture index of a temporal merge candidate.
- a reference picture index that is most frequently used from among the reference picture indexes may be set as the reference picture index of a temporal merge candidate.
- a reference picture index having a minimum value among the plurality of reference picture indexes or the reference picture index of a left or upper block may be set as the reference picture index of a temporal merge candidate.
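- One of the options above, picking the most frequently used neighboring reference picture index with index 0 as the default, could be sketched as follows (the tie-break toward the smaller index is an assumption):

```python
from collections import Counter

def temporal_merge_ref_idx(neighbor_ref_idxs):
    """Return the most frequent reference picture index among the
    effective neighboring PUs (None = not effective); fall back to
    index 0 when none is effective."""
    effective = [r for r in neighbor_ref_idxs if r is not None]
    if not effective:
        return 0
    counts = Counter(effective)
    # Highest count wins; ties go to the smaller index.
    return max(counts.items(), key=lambda kv: (kv[1], -kv[0]))[0]

print(temporal_merge_ref_idx([0, 1, 0]))     # -> 0 (most frequent)
print(temporal_merge_ref_idx([None, None]))  # -> 0 (default)
```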
- a picture including the temporal merge candidate block (hereinafter, referred to as a temporal merge candidate picture) is determined.
- the temporal merge candidate picture may be set to the picture with reference picture index 0.
- if the slice type is P, the first picture of reference picture list 0 (i.e. the picture with index 0) is set as the temporal merge candidate picture.
- if the slice type is B, the first picture of the reference picture list indicated by a flag that indicates a temporal merge candidate list in a slice header is set as the temporal merge candidate picture. For example, if the flag is 1, a temporal merge candidate picture may be selected from list 0, and if the flag is 0, a temporal merge candidate picture may be selected from list 1.
- a temporal merge candidate block is obtained from the temporal merge candidate picture.
- One of a plurality of blocks corresponding to the current PU within the temporal merge candidate picture may be determined as the temporal merge candidate block.
- the plurality of blocks corresponding to the current PU are prioritized and a first effective corresponding block is selected as the temporal merge candidate block according to the priority levels.
- a bottom-left corner block adjacent to a block corresponding to the current PU within the temporal merge candidate picture or a bottom-left block included in the block corresponding to the current PU within the temporal merge candidate picture may be set as a first candidate block.
- a block including a top-left pixel or a block including a bottom-right pixel, at the center of the block corresponding to the current PU within the temporal merge candidate picture may be set as a second candidate block.
- if the first candidate block is effective, it is set as the temporal merge candidate block.
- otherwise, if the second candidate block is effective, the second candidate block is set as the temporal merge candidate block.
- alternatively, only the second candidate block may be used according to the position of the current PU within a slice or an LCU.
- the motion vector of the temporal merge candidate prediction block is set as a temporal merge candidate motion vector.
- the temporal merge candidate may be adaptively disabled according to the size of the current PU. For example, if the current PU is a 4×4 block, the temporal merge candidate may be disabled to reduce complexity.
- the merge candidate list is generated using the effective merge candidates in a predetermined order. If a plurality of merge candidates have the same motion information (i.e. the same motion vector and the same reference picture index), a lower-ranked merge candidate is deleted from the merge candidate list.
- the predetermined order may be A, B, Col, C, and D in Embodiment 1 (spatial merge candidate configuration 1).
- Col represents a temporal merge candidate.
- the merge candidate list may be generated in the order of two effective PUs and Col, the two effective PUs being determined by scanning the blocks A, B, C, D and E in this order.
- the predetermined order may be A, B, Col, C, D. If at least one of the blocks A, B, C and D is not effective, the block E may be added. In this case, the block E may be added at the lowest rank.
- the merge candidate list may be generated in the order of (one of A and D), (one of C, B and E), and Col.
- the predetermined order may be A, B, Col, Corner, or A, B, Corner, Col.
- the number of merge candidates may be determined on a slice or LCU basis.
- the merge candidate list is generated in a predetermined order in the above embodiments.
- if the number of effective merge candidates is smaller than a predetermined number, merge candidates are generated (S250).
- the generated merge candidates are added to the merge candidate list. In this case, the generated merge candidates are added below the lowest ranked merge candidate in the merge candidate list. If a plurality of merge candidates are added, they are added in a predetermined order.
- the added merge candidate may be a candidate with motion vector 0 and reference picture index 0 (a first added merge candidate).
- the added merge candidate may be a candidate generated by combining the motion information of effective merge candidates (a second added merge candidate).
- a candidate may be generated by combining the motion information (the reference picture index) of a temporal merge candidate with the motion information (motion vector) of an effective spatial merge candidate and then added to the merge candidate list.
- Merge candidates may be added in the order of the first and second added merge candidates or in the reverse order.
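- A compact sketch of this list construction, with duplicate removal and a zero-motion candidate standing in for the added merge candidates (combined candidates are omitted for brevity; names and data layout are illustrative):

```python
def build_merge_list(candidates, list_size):
    """Keep effective candidates in the predetermined order, drop
    lower-ranked duplicates (same motion vector and same reference
    picture index), then append zero-motion candidates until the
    list is full.  Each candidate is a (motion_vector, ref_idx) pair."""
    merge_list = []
    for cand in candidates:
        if cand is not None and cand not in merge_list:
            merge_list.append(cand)
    while len(merge_list) < list_size:
        # Repeated zero candidates stand in for the added (combined)
        # candidates described above.
        merge_list.append(((0, 0), 0))
    return merge_list[:list_size]

cands = [((1, 0), 0), ((1, 0), 0), None, ((3, 2), 1)]
print(build_merge_list(cands, 4))
# -> [((1, 0), 0), ((3, 2), 1), ((0, 0), 0), ((0, 0), 0)]
```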
- the steps S240 and S250 may be omitted.
- a merge candidate is determined as a merge predictor of the current PU, from the generated merge candidate list (S260).
- the index of the merge predictor, i.e. the merge index, is determined.
- the merge index is then encoded (S270).
- the merge index may be encoded by fixed-length coding or CAVLC. If CAVLC is adopted, the merge index for codeword mapping may be adjusted according to a PU shape and a PU index.
- the number of merge candidates may be variable.
- a codeword corresponding to the merge index is selected using a table that is determined according to the number of effective merge candidates.
- the number of merge candidates may be fixed. In this case, a codeword corresponding to the merge index is selected using a single table corresponding to the number of merge candidates.
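- For illustration, a truncated unary code is one way a merge-index codeword could depend on the number of effective candidates; the text does not specify the actual VLC tables, so the mapping below is an assumption:

```python
def merge_index_codeword(index: int, num_candidates: int) -> str:
    """Truncated unary codeword for a merge index: shorter tables
    when fewer candidates are effective; nothing is coded when the
    index can be inferred."""
    if num_candidates <= 1:
        return ''                     # single candidate: index inferred
    if index < num_candidates - 1:
        return '1' * index + '0'
    return '1' * index                # last index needs no terminator

print(merge_index_codeword(0, 3))  # -> '0'
print(merge_index_codeword(2, 3))  # -> '11'
```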
- referring to FIG. 6, a spatial AMVP candidate and a temporal AMVP candidate are derived (S310 and S320).
- spatial AMVP candidates may include one (a left candidate) of the left PU (the block A) and bottom-left PU (the block D) adjacent to the current PU, and one (an upper candidate) of the upper PU (the block B), top-right PU (the block C), and top-left PU (the block E) adjacent to the current PU.
- the motion vector of a first effective PU is selected as the left or upper candidate by scanning PUs in a predetermined order.
- the left PUs may be scanned in the order of A and D or in the order of D and A.
- the upper PUs may be scanned in the order of B, C and E or in the order of C, B and E.
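- The left/upper derivation just described can be sketched as follows, using the A-then-D and B-then-C-then-E scan orders (one of the two orders given for each group; the data representation is illustrative):

```python
def amvp_spatial_candidates(neighbors):
    """Left candidate: motion vector of the first effective PU among
    blocks A and D.  Upper candidate: first effective among B, C, E.
    `neighbors` maps a position to a motion vector or None."""
    def first_effective(order):
        for pos in order:
            mv = neighbors.get(pos)
            if mv is not None:
                return mv
        return None

    left = first_effective(('A', 'D'))
    upper = first_effective(('B', 'C', 'E'))
    return [mv for mv in (left, upper) if mv is not None]

print(amvp_spatial_candidates({'A': None, 'D': (1, 1), 'B': (2, 0)}))
# -> [(1, 1), (2, 0)]
```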
- the spatial AMVP candidates may be two effective PUs selected from the left PU (the block A), upper PU (the block B), top-right PU (the block C), and bottom-left PU (the block D) adjacent to the current PU by scanning them in the order of A, B, C and D.
- all of effective PUs may be candidates or two effective PUs obtained by scanning the blocks A, B, C and D in this order may be candidates. If there are a plurality of PUs to the left of the current PU, an effective uppermost PU or an effective PU having a largest area may be set as the left PU. Similarly, if there are a plurality of PUs above the current PU, an effective leftmost PU or an effective PU having a largest area may be set as the upper PU.
- spatial AMVP candidates may include two effective PUs obtained by scanning the left PU (the block A), upper PU (the block B), top-right PU (the block C), bottom-left PU (the block D), and top-left PU (the block E) adjacent to the current PU in this order.
- the left PU may be adjacent to the block E, not to the block D.
- the upper PU may be adjacent to the block E, not to the block C.
- spatial AMVP candidates may be four blocks selected from among the left PU (the block A), upper PU (the block B), top-right PU (the block C), bottom-left PU (the block D), and top-left PU (the block E) adjacent to the current PU.
- the block E may be available when one or more of blocks A to D are not effective.
- spatial AMVP candidates may include the left PU (the block A), upper PU (the block B), and a corner PU (one of the blocks C, D and E) adjacent to the current PU.
- the corner PU is a first effective one of the top-right PU (the block C), bottom-left PU (the block D), and top-left PU (block E) of the current PU by scanning them in the order of C, D and E.
- motion information of AMVP candidates above the current PU may be set differently according to the position of the current PU.
- the motion vector of an upper PU (the block B, C or E) adjacent to the current PU may be its own motion vector or the motion vector of an adjacent PU.
- the motion vector of the upper PU may be determined as its own motion vector or the motion vector of an adjacent PU according to the size and position of the current PU.
- since a temporal AMVP candidate needs only a motion vector, there is no need to obtain a reference picture index, unlike a merge candidate. An operation for obtaining the motion vector of a temporal AMVP candidate will first be described.
- a picture including the temporal AMVP candidate block (hereinafter, referred to as a temporal AMVP candidate picture) is determined.
- the temporal AMVP candidate picture may be set to the picture with reference picture index 0.
- if the slice type is P, the first picture of reference picture list 0 (i.e. the picture with index 0) is set as the temporal AMVP candidate picture.
- if the slice type is B, the first picture of the reference picture list indicated by a flag that indicates a temporal AMVP candidate list in a slice header is set as the temporal AMVP candidate picture.
- a temporal AMVP candidate block is obtained from the temporal AMVP candidate picture. This is performed in the same manner as the operation for obtaining a temporal merge candidate block and thus its description will not be provided herein.
- the temporal AMVP candidate may be adaptively disabled according to the size of the current PU. For example, if the current PU is a 4×4 block, the temporal AMVP candidate may be disabled to reduce complexity.
- an AMVP candidate list is generated (S330).
- the AMVP candidate list is generated using the effective AMVP candidates in a predetermined order. If a plurality of AMVP candidates have the same motion vector (the reference pictures need not be identical), the lower-ranked AMVP candidates are deleted from the AMVP candidate list.
- the predetermined order is one of A and D (the order of A and D or the order of D and A), one of B, C and E (the order of B, C and E or the order of C, B and E), and Col, or Col, one of A and D, and one of B, C and E.
- Col represents a temporal AMVP candidate.
- the predetermined order is A, B, Col, C, D or C, D, Col, A, B.
- the predetermined order is (two effective ones of A, B, C, D and E in this order) and Col or Col and (two effective ones of A, B, C, D and E in this order).
- the predetermined order is A, B, Col, C, and D. If at least one of the blocks A, B, C and D is not effective, the block E may be added at the lowest rank.
- the predetermined order is A, B, Col, and corner.
- if the number of effective AMVP candidates is smaller than a fixed value, AMVP candidates are generated (S350).
- the fixed value may be 2 or 3.
- the generated AMVP candidates are added below the lowest-ranked AMVP candidate in the AMVP candidate list.
- the added AMVP candidate may be a candidate with motion vector 0.
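- Steps S330 to S350 can be summarized in the sketch below: effective candidates are kept in the predetermined order, lower-ranked candidates with a duplicate motion vector are deleted, and zero motion vectors pad the list up to the fixed count (names are illustrative):

```python
def build_amvp_list(candidates, fixed_count=2):
    """Deduplicate on the motion vector alone (the reference pictures
    need not match) and pad with zero motion vectors."""
    amvp_list = []
    for mv in candidates:
        if mv is not None and mv not in amvp_list:
            amvp_list.append(mv)
    while len(amvp_list) < fixed_count:
        amvp_list.append((0, 0))
    return amvp_list

print(build_amvp_list([(2, 1), (2, 1), None]))  # -> [(2, 1), (0, 0)]
```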
- the steps S340 and S350 may be omitted.
- a motion vector predictor of the current PU is selected from the AMVP candidate list (S360).
- An AMVP index indicating the predictor is generated.
- then a differential motion vector between the motion vector of the current PU and the motion vector predictor is calculated, and the reference picture index of the current PU, the differential motion vector, and the AMVP index are encoded (S380).
- if there is a single effective AMVP candidate, the AMVP index may be omitted.
- the AMVP index may be encoded by fixed-length coding or CAVLC. If CAVLC is adopted, the AMVP index for codeword mapping may be adjusted according to a PU shape and a PU index.
- the number of AMVP candidates may be variable.
- a codeword corresponding to the AMVP index is selected using a table determined according to the number of effective AMVP candidates.
- the merge candidate block may be identical to the AMVP candidate block, for example, when the AMVP candidate configuration is identical to the merge candidate configuration.
- in this case, encoder complexity can be reduced.
- FIG. 7 is a block diagram of a video decoder according to an embodiment of the present invention.
- the video decoder of the present invention includes an entropy decoder 210, an inverse quantizer/inverse transformer 220, an adder 270, a deblocking filter 250, a picture storage 260, an intra-predictor 230, a motion compensation predictor 240, and an intra/inter switch 280.
- the entropy decoder 210 separates an intra-prediction mode index, motion information, and a quantized coefficient sequence from a coded bit stream received from the video encoder by decoding the coded bit stream.
- the entropy decoder 210 provides the decoded motion information to the motion compensation predictor 240, the intra-prediction mode index to the intra-predictor 230 and the inverse quantizer/inverse transformer 220, and the quantized coefficient sequence to the inverse quantizer/inverse transformer 220.
- the inverse quantizer/inverse transformer 220 converts the quantized coefficient sequence to a two-dimensional array of dequantized coefficients. For the conversion, one of a plurality of scan patterns is selected based on at least one of the prediction mode (i.e. one of intra-prediction and inter-prediction) and intra-prediction mode of the current block.
- the intra-prediction mode is received from the intra-predictor 230 or the entropy decoder 210 .
- the inverse quantizer/inverse transformer 220 reconstructs quantized coefficients from the two-dimensional array of dequantized coefficients using a quantization matrix selected from among a plurality of quantization matrices. Even for blocks having the same size, the inverse quantizer/inverse transformer 220 selects a quantization matrix based on at least one of the prediction mode and intra-prediction mode of a current block. Then a residual block is reconstructed by inversely transforming the reconstructed quantized coefficients.
- the adder 270 adds the reconstructed residual block received from the inverse quantizer/inverse transformer 220 to a prediction block generated from the intra-predictor 230 or the motion compensation predictor 240 , thereby reconstructing an image block.
- the deblocking filter 250 performs deblocking filtering on the reconstructed image generated by the adder 270.
- thus, blocking artifacts caused by image loss during quantization may be reduced.
- the picture storage 260 includes a frame memory that preserves a locally decoded image that has been deblocking-filtered by the deblocking filter 250.
- the intra-predictor 230 determines the intra-prediction mode of the current block based on the intra-prediction mode index received from the entropy decoder 210 and generates a prediction block according to the determined intra-prediction mode.
- the motion compensation predictor 240 generates a prediction block of the current block from a picture stored in the picture storage 260 based on the motion vector information. If motion compensation with fractional-pel accuracy is applied, the prediction block is generated using a selected interpolation filter.
- the intra/inter switch 280 provides one of the prediction block generated from the intra-predictor 230 and the prediction block generated from the motion compensation predictor 240 to the adder 270 .
- FIG. 8 is a flowchart illustrating an inter-prediction decoding operation according to an embodiment of the present invention.
- the video decoder may check whether a current PU to be decoded has been encoded in SKIP mode (S405). The check may be made based on skip_flag of a CU.
- if the current PU is in SKIP mode, the motion information of the current PU is decoded according to a motion information decoding process corresponding to the SKIP mode (S410).
- the motion information decoding process corresponding to the SKIP mode is the same as a motion information decoding process corresponding to a merge mode.
- a corresponding block within a reference picture, indicated by the decoded motion information of the current PU, is copied, thereby generating a reconstructed block of the current PU (S415).
- if the current PU has been encoded in merge mode, the motion information of the current PU is decoded in the motion information decoding process corresponding to the merge mode (S425).
- a prediction block is generated using the decoded motion information of the current PU (S430).
- a residual block is decoded (S435).
- a reconstructed block of the current PU is generated using the prediction block and the residual block (S440).
- if the current PU has been encoded in AMVP mode, the motion information of the current PU is decoded in a motion information decoding process corresponding to the AMVP mode (S445).
- a prediction block is generated using the decoded motion information of the current PU (S450) and the residual block is decoded (S455).
- a reconstructed block is generated using the prediction block and the residual block (S460).
- the motion information decoding process is different depending on the coding pattern of the motion information of the current PU.
- the coding pattern of the motion information of the current PU may be one of merge mode and AMVP mode.
- in SKIP mode, the same motion information decoding process as in the merge mode is performed.
- FIG. 9 is a flowchart illustrating a motion vector decoding operation, when the number of merge candidates is variable.
- if it is determined that there is a single merge candidate for the current PU, an effective merge candidate is searched for (S520).
- merge candidate configurations and merge candidate search orders (i.e. listing orders) have been described before with reference to FIG. 3.
- the motion information of the current PU is generated using the motion information of the merge candidate (S530). That is, the reference picture index and motion vector of the merge candidate are set as the reference picture index and motion vector of the current PU.
- otherwise, a merge candidate list is composed of the effective merge candidates (S540). Methods for configuring merge candidates and generating a merge candidate list have been described before with reference to FIG. 3.
- a VLC table corresponding to the number of merge candidates is selected (S550).
- a merge index corresponding to a merge codeword in a received bit stream is reconstructed (S560).
- a merge candidate corresponding to the merge index is selected from the merge candidate list, and the motion information of the merge candidate is set as the motion information of the current PU (S570).
- FIG. 10 is a flowchart illustrating a motion vector decoding operation, when the number of merge candidates is fixed.
- the number of merge candidates may be fixed on a picture or slice basis.
- Merge candidates include a spatial merge candidate and a temporal merge candidate.
- the positions of spatial merge candidates, the method for deriving the spatial merge candidates, the positions of temporal merge candidates, and the method for deriving the temporal merge candidates have been described before with reference to FIG. 3.
- the temporal merge candidate may not be used according to the size of the current PU.
- for example, the merge candidate may be omitted for a 4×4 PU.
- if the number of effective merge candidates is smaller than the fixed number, a merge candidate is generated (S630).
- the merge candidate may be generated by combining the motion information of effective merge candidates.
- a merge candidate with motion vector 0 and reference picture index 0 may be added. Merge candidates are added in a predetermined order.
- a merge list is made using the merge candidates.
- this step may be performed in combination with the steps S620 and S630.
- the merge candidate configurations and the merge candidate search orders (i.e. listing orders) have been described before with reference to FIG. 3.
- a merge index corresponding to a merge codeword in a received bit stream is reconstructed (S650). Since the number of merge candidates is fixed, the merge index corresponding to the merge codeword may be obtained from one decoding table corresponding to the number of merge candidates. However, a different decoding table may be used depending on whether a temporal merge candidate is used.
- a candidate corresponding to the merge index is searched for in the merge list (S660).
- the searched merge candidate is determined to be a merge predictor.
- the motion information of the current PU is generated using the motion information of the merge predictor (S670). Specifically, the motion information of the merge predictor, i.e. the reference picture index and motion vector of the merge predictor, are determined to be the reference picture index and motion vector of the current PU.
- FIG. 11 is a flowchart illustrating a motion vector decoding operation, when the number of AMVP candidates is variable.
- the reference picture index and differential motion vector of a current PU are parsed (S 710 ).
- an effective AMVP candidate is searched, determining that the number of AMVP candidates for the current PU is 1 (S 730 ).
- the AMVP candidate configurations and the AMVP candidate search orders i.e. listing orders
- Upon a search of an effective AMVP candidate, the motion vector of the AMVP candidate is set as a motion vector predictor of the current PU (S740).
- In the presence of an AMVP codeword, an AMVP candidate list is generated by searching effective AMVP candidates (S750). The AMVP candidate configurations and the AMVP candidate search orders (i.e. listing orders) have been described before with reference to FIG. 6.
- A VLC table corresponding to the number of AMVP candidates is selected (S760).
- An AMVP index corresponding to the AMVP codeword is reconstructed (S770).
- An AMVP candidate corresponding to the AMVP index is selected from the AMVP candidate list, and the motion vector of the AMVP candidate is set as a motion vector predictor of the current PU (S780).
- The sum of the motion vector predictor obtained in the step S740 or S780 and the differential motion vector obtained in the step S710 is set as a final motion vector of the current PU (S790).
- FIG. 12 is a flowchart illustrating a motion vector decoding operation, when the number of AMVP candidates is fixed.
- The reference picture index and differential motion vector of a current PU are parsed (S810).
- Effective AMVP candidates are searched (S820). AMVP candidates include a spatial AMVP candidate and a temporal AMVP candidate. If the current PU is smaller than a predetermined size, the temporal AMVP candidate may not be used. For example, the temporal AMVP candidate may be omitted for a 4×4 PU.
- If the number of effective AMVP candidates is smaller than a predetermined value, an AMVP candidate is generated (S840). The predetermined value may be 2 or 3.
- The motion vector of an effective PU may be added as a generated AMVP candidate, or an AMVP candidate with motion vector 0 may be added.
- An AMVP candidate list is generated using the effective AMVP candidates and/or the generated AMVP candidate (S850).
- The step S850 may be performed after the step S820 or, when an AMVP candidate is generated, after the step S840. How to generate a candidate list has been described before with reference to FIG. 6.
- An AMVP index corresponding to an AMVP codeword is recovered (S860). The AMVP index may be encoded by fixed-length coding.
- An AMVP candidate corresponding to the AMVP index is searched from the AMVP candidate list (S870). The searched AMVP candidate is determined to be an AMVP predictor.
- The motion vector of the AMVP predictor is determined to be the motion vector predictor of the current PU. The sum of the differential motion vector obtained in the step S810 and the motion vector predictor is set as a final motion vector of the current PU, and the reference picture index obtained in the step S810 is set as the reference picture index of the current PU (S880).
Abstract
A method for generating a prediction block in Advanced Motion Vector Prediction (AMVP) mode to reconstruct a prediction-coded video signal using a motion vector approximate to original motion information. An AMVP candidate list is generated using effective spatial and temporal AMVP candidates for a current Prediction Unit (PU). If the number of the effective AMVP candidates is smaller than a predetermined value, a motion vector having a predetermined value is added as a candidate to the AMVP candidate list. Then a motion vector corresponding to an AMVP index of the current PU from among motion vectors included in the AMVP candidate list is determined to be a motion vector predictor of the current PU.
Description
- The present invention relates to a method for generating a prediction block of an image that has been encoded in Advanced Motion Vector Prediction (AMVP) mode, and more particularly, to a method for decoding motion information encoded in AMVP mode and generating a prediction block based on the motion information.
- Many techniques have been proposed to effectively compress a video signal while maintaining video quality. Particularly, inter-prediction coding is one of the most effective video compression techniques, in which a block similar to a current block is extracted from a previous picture and the difference between the current block and the extracted block is encoded.
- However, motion information about each block should be additionally transmitted in the inter-prediction coding scheme, with a coded residual block. Therefore, effective coding of motion information that reduces the amount of data is another video compression technique.
- In motion estimation coding, a block best matching a current block is searched for within a predetermined search range of a reference picture using a predetermined evaluation function. Once the best matching block is found in the reference picture, only the residue between the current block and the best matching block is transmitted, thereby increasing the data compression rate.
- To decode the current block encoded through motion estimation, information about a motion vector representing a difference between a position of the current block and that of the best matching block is needed. Thus, the motion vector information is encoded and inserted into a bit stream during coding. If the motion vector information is simply encoded and inserted, overhead is increased, thereby decreasing the compression rate of video data.
- Accordingly, the motion vector of the current block is predicted using neighboring blocks and only the difference between a motion vector predictor resulting from the prediction and the original motion vector is encoded and transmitted, thereby compressing the motion vector information in the inter-prediction coding scheme.
- In H.264, the motion vector predictor of the current block is determined to be the median value of the motion vectors mvA, mvB and mvC of blocks neighboring the current block. Since the neighboring blocks are likely to be similar to one another, the median value of their motion vectors is determined to be the motion vector predictor of the current block.
- However, if one or more of the motion vectors of the neighboring blocks differ from the motion vector of the current block, the median value may not predict the motion vector of the current block effectively.
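- As a concrete illustration of this median rule, the following sketch (Python, with motion vectors as hypothetical (x, y) tuples) computes the component-wise median of the three neighboring motion vectors; it mirrors the H.264-style prediction described above, not any mechanism claimed by the present invention:

    def median_mv_predictor(mv_a, mv_b, mv_c):
        # Component-wise median of the three neighboring motion vectors
        # (mvA, mvB, mvC), each given as an (x, y) tuple.
        median = lambda a, b, c: sorted((a, b, c))[1]
        return (median(mv_a[0], mv_b[0], mv_c[0]),
                median(mv_a[1], mv_b[1], mv_c[1]))

    # Example: neighbors (3, 1), (4, 2), (100, 2) -> predictor (4, 2);
    # a single outlier neighbor does not dominate the prediction.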
- In addition, as prediction blocks become larger in size and more diversified, the number of reference pictures is increased. Thus, the data amount of a residual block is reduced, but the amount of motion information to be transmitted (a motion vector and a reference picture index) is increased.
- Accordingly, there exists a need for a technique for more effectively reducing the amount of motion information to be transmitted. In addition, a technique for efficiently reconstructing motion information encoded in the above technique is needed.
- An object of the present invention devised to solve the problem lies on a method for generating a prediction block by effectively reconstructing motion information encoded in AMVP mode.
- The object of the present invention can be achieved by providing a method for generating a prediction block in AMVP mode, including reconstructing a reference picture index and a differential motion vector of a current Prediction Unit (PU), searching an effective spatial AMVP candidate for the current PU, searching an effective temporal AMVP candidate for the current PU, generating an AMVP candidate list using the effective spatial and temporal AMVP candidates, adding a motion vector having a predetermined value as a candidate to the AMVP candidate list, when the number of the effective AMVP candidates is smaller than a predetermined number, determining a motion vector corresponding to an AMVP index of the current PU from among motion vectors included in the AMVP candidate list to be a motion vector predictor of the current PU, reconstructing a motion vector of the current PU using the differential motion vector and the motion vector predictor, and generating a prediction block corresponding to a position indicated by the reconstructed motion vector within a reference picture indicated by the reference picture index.
- In the method for generating a prediction block in AMVP mode according to the present invention, a reference picture index and a differential motion vector of a current prediction unit are reconstructed and an AMVP candidate list is made using effective spatial and temporal AMVP candidates of the current prediction unit. If the number of the effective AMVP candidates is smaller than a predetermined number, a motion vector having a predetermined value is added to the AMVP candidate list. Then, a motion vector corresponding to an AMVP index of the current prediction unit is selected as a motion vector predictor of the current prediction unit from among motion vectors included in the AMVP candidate list. A motion vector of the current prediction unit is reconstructed using the differential motion vector and the motion vector predictor and a prediction block corresponding to a position indicated by the reconstructed motion vector in a reference picture indicated by the reference picture index is generated.
- Since motion information of the current prediction unit is predicted better using the spatial and temporal motion vector candidates, the amount of coded information is reduced. Furthermore, an accurate prediction block can be generated fast by decoding motion information encoded in AMVP mode very effectively.
- The accompanying drawings, which are included to provide a further understanding of the invention, illustrate embodiments of the invention and together with the description serve to explain the principle of the invention.
- In the drawings:
- FIG. 1 is a block diagram of a video encoder according to an embodiment of the present invention;
- FIG. 2 is a flowchart illustrating an inter-prediction coding operation according to an embodiment of the present invention;
- FIG. 3 is a flowchart illustrating a merge coding operation according to an embodiment of the present invention;
- FIG. 4 illustrates the positions of merge candidates according to an embodiment of the present invention;
- FIG. 5 illustrates the positions of merge candidates according to another embodiment of the present invention;
- FIG. 6 is a flowchart illustrating an AMVP coding operation according to an embodiment of the present invention;
- FIG. 7 is a block diagram of a video decoder according to an embodiment of the present invention;
- FIG. 8 is a flowchart illustrating an inter-prediction decoding operation according to an embodiment of the present invention;
- FIG. 9 is a flowchart illustrating a merge-mode motion vector decoding operation according to an embodiment of the present invention;
- FIG. 10 is a flowchart illustrating a merge-mode motion vector decoding operation according to another embodiment of the present invention;
- FIG. 11 is a flowchart illustrating an AMVP-mode motion vector decoding operation according to an embodiment of the present invention; and
- FIG. 12 is a flowchart illustrating an AMVP-mode motion vector decoding operation according to another embodiment of the present invention.
- FIG. 1 is a block diagram of a video encoder according to an embodiment of the present invention.
- Referring to
FIG. 1, a video encoder 100 according to the present invention includes a picture divider 110, a transformer 120, a quantizer 130, a scanner 131, an entropy encoder 140, an intra-predictor 150, an inter-predictor 160, an inverse quantizer 135, an inverse transformer 125, a post-processor 170, a picture storage 180, a subtractor 190, and an adder 195.
- The picture divider 110 partitions every Largest Coding Unit (LCU) of a picture into CUs each having a predetermined size by analyzing an input video signal, determines a prediction mode, and determines a size of a Prediction Unit (PU) for each CU. The
picture divider 110 provides a PU to be encoded to the intra-predictor 150 or the inter-predictor 160 according to a prediction mode (or prediction method). - The
transformer 120 transforms a residual block which indicates a residual signal between the original block of an input PU and a prediction block generated from the intra-predictor 150 or the inter-predictor 160. The residual block is composed of CU or PU. The residual block is divided into optimum transform units and then transformed. A transform matrix may be differently determined based on a prediction mode (i.e. inter-prediction mode or intra-prediction mode). Because an intra-prediction residual signal includes directionality corresponding to the intra-prediction mode, a transform matrix may be determined for the intra-prediction residual signal adaptively according to the intra-prediction mode. Transform units may be transformed by two (horizontal and vertical) one-dimensional transform matrices. For example, a predetermined single transform matrix is determined for inter-prediction. On the other hand, in case of intra-prediction, if the intra-prediction mode is horizontal, the residual block is likely to be directional horizontally and thus a Discrete Cosine Transform (DCT)-based integer matrix and a Discrete Sine Transform (DST)-based or Karhunen-Loeve Transform (KLT)-based integer matrix are respectively applied vertically and horizontally. If the intra-prediction mode is vertical, a DST-based or KLT-based integer matrix and a DCT-based integer matrix are respectively applied vertically and horizontally. In DC mode, a DCT-based integer matrix is applied in both directions. In addition, in case of intra-prediction, a transform matrix may be determined adaptively according to the size of a transform unit. - The
quantizer 130 determines a quantization step size to quantize the coefficients of the residual block transformed using the transform matrix. The quantization step size is determined for each CU of a predetermined size or larger (hereinafter, referred to as a quantization unit). The predetermined size may be 8×8 or 16×16. The coefficients of the transformed block are quantized using the determined quantization step size and the quantization matrix determined according to the prediction mode. The quantizer 130 uses the quantization step size of a quantization unit adjacent to a current quantization unit as a quantization step size predictor of the current quantization unit. - The
quantizer 130 may generate the quantization step size predictor of the current quantization unit using one or two effective quantization step sizes resulting from sequential search of left, upper, and top-left quantization units adjacent to the current quantization unit. For example, the first one of effective quantization step sizes detected by searching the left, upper, and top-left quantization units in this order may be determined to be the quantization step size predictor. In addition, the average of the two effective quantization step sizes may be determined to be the quantization step size predictor. If only one quantization step size is effective, it may be determined to be the quantization step size predictor. Once the quantization step size predictor is determined, the difference between the quantization step size of the current CU and the quantization step size predictor is transmitted to the entropy encoder 140.
- The quantized transformed block is provided to the
inverse quantizer 135 and thescanner 131. - The
scanner 131 converts the coefficients of the quantized transformed block to one-dimensional quantization coefficients by scanning the coefficients of the quantized transformed block. Since the coefficient distribution of the transformed block may be dependent on the intra-prediction mode after quantization, a scanning scheme is determined according to the intra-prediction mode. In addition, the coefficient scanning scheme may vary with the size of a transform unit. A scan pattern may be different according to a directional intra-prediction mode. The quantized coefficients are scanned in a reverse order. - In the case where the quantized coefficients are divided into a plurality of subsets, the same scan pattern applies to the quantization coefficients of each subset. A zigzag or diagonal scan pattern applies between subsets. Although scanning from a main subset including a DC to the remaining subsets in a forward direction is preferable, scanning in a reverse direction is also possible. The inter-subset scan pattern may be set to be identical to the intra-subset scan pattern. In this case, the inter-subset scan pattern is determined according to an intra-prediction mode. Meanwhile, the video encoder transmits information indicating the position of a last non-zero quantized coefficient in the transform unit to a video decoder. The video encoder may also transmit information indicating the position of a last non-zero quantized coefficient in each subset to the decoder.
- The
inverse quantizer 135 dequantizes the quantized coefficients. Theinverse transformer 125 reconstructs a spatial-domain residual block from the inverse-quantized transformed coefficients. The adder generates a reconstructed block by adding the residual block reconstructed by theinverse transformer 125 to a prediction block received from the intra-predictor 150 or the inter-predictor 160. - The post-processor 170 performs deblocking filtering to eliminate blocking artifact from a reconstructed picture, adaptive offset application to compensate for a difference from the original picture on a pixel basis, and adaptive loop filtering to compensate for a difference from the original picture on a CU basis.
- Deblocking filtering is preferably applied to the boundary between a PU and a transform unit which are of a predetermined size or larger. The size may be 8×8. The deblocking filtering process includes determining a boundary to be filtered, determining a boundary filtering strength to apply to the boundary, determining whether to apply a deblocking filter, and selecting a filter to apply to the boundary when determined to apply the deblocking filter.
- It is determined whether to apply a deblocking filter according to i) whether the boundary filtering strength is larger than 0 and ii) whether a variation of pixels at the boundary between two blocks (a P block and a Q block) adjacent to the boundary to be filtered is smaller than a first reference value determined based on a quantization parameter.
- For the deblocking filtering, two or more filters are preferable. If the absolute value of the difference between two pixels at the block boundary is equal to or larger than a second reference value, a filter that performs relatively weak filtering is selected. The second reference value is determined by the quantization parameter and the boundary filtering strength. Adaptive offset application is intended to reduce the difference (i.e. distortion) between pixels in a deblocking-filtered picture and original pixels. It may be determined whether to perform the adaptive offset applying process on a picture basis or on a slice basis. A picture or slice may be divided into a plurality of offset areas and an offset type may be determined per offset area. There may be a predetermined number of (e.g. 4) edge offset types and two band offset types. In case of an edge offset type, the edge type of each pixel is determined and an offset corresponding to the edge type is applied to the pixel. The edge type is determined based on the distribution of two pixel values adjacent to a current pixel. Adaptive loop filtering may be performed based on a comparison value between an original picture and a reconstructed picture that has been subjected to deblocking filtering or adaptive offset application. Adaptive loop filtering may apply across all pixels included in a 4×4 or 8×8 block. It may be determined for each CU whether to apply adaptive loop filtering. The size and coefficient of a loop filter may be different for each CU. Information indicating whether an adaptive loop filter is used for each CU may be included in each slice header. In case of a chrominance signal, the determination may be made on a picture basis. Unlike luminance, the loop filter may be rectangular.
- A determination as to whether to use adaptive loop filtering may be made on a slice basis. Therefore, information indicating whether the adaptive loop filtering is used for a current slice is included in a slice header or a picture header. If the information indicates that the adaptive loop filtering is used for the current slice, the slice header or picture header further may incude information indicating the horizontal and/or vertical filter length of a luminance component used in the adaptive loop filtering.
- The slice header or picture header may include information indicating the number of filter sets. If the number of filter sets is 2 or larger, filter coefficients may be encoded in a prediction scheme. Accordingly, the slice header or picture header may include information indicating whether filter coefficients are encoded in a prediction scheme. If the prediction scheme is used, predicted filter coefficients are included in the slice header or picture header.
- Meanwhile, chrominance components as well as luminance components may be filtered adaptively. Therefore, information indicating whether each chrominance component is filtered or not may be included in the slice header or picture header. In this case, information indicating whether chrominance components Cr and Cb are filtered may be jointly encoded (i.e. multiplexed coding) to thereby reduce the number of bits. Both chrominance components Cr and Cb are not filtered in many cases to thereby reduce complexity. Thus, if both chrominance components Cr and Cb are not filtered, a lowest index is assigned and entropy-encoded. If both chrominance components Or and Cb are filtered, a highest index is assigned and entropy-encoded.
- The
picture storage 180 receives post-processed image data from the post-processor 170, reconstructs and stores an image on a picture basis. A picture may be an image in a frame or field. Thepicture storage 180 includes a buffer (not shown) for storing a plurality of pictures. - The inter-predictor 160 estimates a motion using at least one reference picture stored in the
picture storage 180 and determines a reference picture index identifying the reference picture and a motion vector. The inter-predictor 160 extracts and outputs a prediction block corresponding to a PU to be encoded from the reference picture used for motion estimation among the plurality of reference pictures stored in the picture storage 180, according to the determined reference picture index and motion vector. - The intra-predictor 150 performs intra-prediction coding using reconstructed pixel values of a picture including the current PU. The intra-predictor 150 receives the current PU to be prediction-encoded, selects one of a predetermined number of intra-prediction modes according to the size of the current block, and performs intra-prediction in the selected intra-prediction mode. The intra-predictor 150 adaptively filters reference pixels to generate an intra-prediction block. If the reference pixels are not available, the intra-predictor 150 may generate reference pixels using available reference pixels. The
entropy encoder 140 entropy-encodes the quantized coefficients received from the quantizer 130, intra-prediction information received from the intra-predictor 150, and motion information received from the inter-predictor 160. -
FIG. 2 is a flowchart illustrating an inter-prediction coding operation according to an embodiment of the present invention. - The inter-prediction coding operation includes determining motion information of a current PU, generating a prediction block, generating a residual block, encoding the residual block, and encoding the motion information. Hereinafter, a PU and a block will be used interchangeably.
- (1) Determination of motion information of a current PU (S110)
- The motion information of the current PU includes a reference picture index to be referred to for the current PU and a motion vector.
- To determine a prediction block of the current PU, one of one or more reconstructed reference pictures is determined to be a reference picture for the current PU and motion information indicating the position of the prediction block in the reference picture is determined.
- The reference picture index for the current block may be different according to the inter-prediction mode of the current block. For example, if the current block is in a single-directional prediction mode, the reference picture index indicates one of the reference pictures listed in List 0 (L0). On the other hand, if the current block is in a bi-directional prediction mode, the motion information may include reference picture indexes indicating one of the reference pictures listed in L0 and one of the reference pictures listed in List 1 (L1). In addition, if the current block is in a bi-directional prediction mode, the motion information may include a reference picture index indicating one or two of the reference pictures included in a List Combination (LC) being a combination of L0 and L1.
- The motion vector indicates the position of the prediction block in a picture indicated by the reference picture index. The motion vector may have an integer-pixel resolution or a ⅛ or 1/16 pixel resolution. If the motion vector does not have an integer-pixel resolution, the prediction block is generated by interpolating integer pixels.
- (2) Generation of a prediction block (S120)
- If the motion vector has an integer-pixel resolution, a prediction block of the current PU is generated by copying a corresponding block at the position indicated by the motion vector in the picture indicated by the reference picture index.
- On the other hand, if the motion vector does not have an integer-pixel resolution, the pixels of a prediction block are generated using integer pixels in the picture indicated by the reference picture index. In case of luminance pixels, prediction pixels may be generated using an 8-tap interpolation filter. In case of chrominance pixels, prediction pixels may be generated using a 4-tap interpolation filter.
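- As a sketch of this fractional-pel interpolation, the following computes one half-sample luminance value with an 8-tap filter (Python; the tap values are the well-known HEVC half-sample taps, used here only as a plausible example of the 8-tap filter mentioned above, and x must leave at least 3 samples to the left and 4 to the right):

    TAPS_8 = [-1, 4, -11, 40, 40, -11, 4, -1]  # illustrative half-pel taps

    def interpolate_half_pel(row, x):
        # row: list of integer luminance samples; x: index of the left
        # neighboring integer sample. Returns the interpolated value,
        # rounded and clipped to the 8-bit range (taps sum to 64).
        acc = sum(t * row[x - 3 + i] for i, t in enumerate(TAPS_8))
        return min(max((acc + 32) >> 6, 0), 255)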
- (3) Generation of a residual block (S130) and coding of the residual block (S140)
- When prediction blocks of the current PU are generated, a residual block is generated based on a difference between the current PU and the prediction block. The size of the residual block may be different from the size of the current PU. For example, if the current PU is of size 2N×2N, the current PU and the residual block are of the same size. However, if the current PU is of size 2N×N or N×2N, the residual block may be a 2N×2N block. That is, when the current PU is a 2N×N block, the residual block may be configured by combining two 2N×N residual blocks. In this case, to overcome the discontinuity of the boundary between two 2N×N prediction blocks, a 2N×2N prediction block is generated by overlap-smoothing boundary pixels and then a residual block is generated using the difference between the 2N×2N original block (two current blocks) and the 2N×2N prediction block.
- When the residual block is generated, the residual block is encoded in units of a transform coding size. That is, the residual block is subjected to transform encoding, quantization, and entropy encoding in units of a transform coding size. The transform coding size may be determined in a quad-tree scheme according to the size of the residual block. Transform coding uses integer-based DCT.
- The transform-encoded block is quantized using a quantization matrix. The quantized block is entropy-encoded by Context-Adaptive Binary Arithmetic Coding (CABAC) or Context-Adaptive Variable-Length Coding (CAVLC).
- (4) Coding of the motion information (S150)
- The motion information of the current PU is encoded using motion information of PUs adjacent to the current PU. The motion information of the current PU is subjected to merge coding or AMVP coding. Therefore, it is determined whether to encode the motion information of the current PU by merge coding or AMVP coding, and the motion information is encoded according to the determined coding scheme.
- A description will be given below of a merge coding scheme with reference to
FIG. 3 . - Referring to
FIG. 3, spatial merge candidates and temporal merge candidates are derived (S210 and S220). For convenience's sake, the spatial merge candidates are derived first and then the temporal merge candidates are derived, by way of example. However, the present invention is not limited to this order of deriving the spatial and temporal merge candidates. For example, the temporal merge candidates may be derived first and then the spatial merge candidates may be derived, or the spatial and temporal merge candidates may be derived in parallel.
- Spatial merge candidates may be configured in one of the following embodiments. Spatial merge candidate configuration information may be transmitted to the video decoder. In this case, the spatial merge candidate configuration information may indicate one of the following embodiments or information indicating the number of merge candidates in one of the following embodiments.
- (a) Embodiment 1 (Spatial Merge Candidate Configuration 1)
- As illustrated in
FIG. 4 , a plurality of spatial merge candidates may be a left PU (block A), an upper PU (block B), a top-right PU (block C), and a bottom-left PU (block D) adjacent to the current PU. In this case, all of the effective PUs may be candidates or two effective PUs may be selected as candidates by scanning the blocks A to D in the order of A, B, C and D. If there are a plurality of PUs to the left of the current PU, an effective uppermost PU or a largest effective PU may be determined as the left - PU adjacent to the current PU from among the plurality of left PUs. Similarly, if there are a plurality of PUs above the current PU, an effective leftmost PU or a largest effective PU may be determined as the upper PU adjacent to the current PU from among the plurality of upper PUs.
- (b) Embodiment 2 (Spatial Merge Candidate Configuration 2)
- As illustrated in
FIG. 5 , a plurality of spatial merge candidates may be two effective PUs selected from among a left PU (block A), an upper PU (block B), a top-right PU (block C), a bottom-left PU (block D), and a top-left PU (block E) adjacent to the current PU by scanning the blocks A to E in the order of A, B, C, D and E. Herein, the left PU may be adjacent to the block E, not to the block D. Similarly, the upper PU may be adjacent to the block E, not to the block C. - (c) Embodiment 3 (Spatial Merge Candidate Configuration 3)
- As illustrated in
FIG. 5 , the left block (the block A), the upper block (the block B), the top-right block (the block C), the bottom-left block (the block D), and the top-left block (the block E) adjacent to the current PU may be candidates in this order, if they are effective. In this case, the block E is available if one or more of the blocks A to D are not effective. - (d) Embodiment 4 (Spatial Merge Candidate Configuration 4)
- As illustrated in
FIG. 5 , a plurality of spatial merge candidates may include the left PU (the block A), the upper PU (the block B), and a corner PU (one of the blocks C, D and E) adjacent to the current PU. The corner PU is a first effective one of the top-right PU (the block C), bottom-left PU (the block D), and top-left PU (the block E) of the current PU by scanning them in the order of C, D and E. - In the above embodiments, motion information of spatial merge candidates above the current PU may be set differently according to the position of the current PU. For example, if the current PU is at the upper boundary of an LCU, motion information of an upper PU (block B, C or E) adjacent to the current PU may be its own motion information or motion information of an adjacent PU. The motion information of the upper PU may be determined as one of its own motion information or motion information (a reference picture index and a motion vector) of an adjacent PU, according to the size and position of the current PU.
- 2) Temporal Merge Candidates
- A reference picture index and a motion vector of a temporal merge candidate are obtained in an additional process. The reference picture index of the temporal merge candidate may be obtained using the reference picture index of one of PUs spatially adjacent to the current PU. Reference picture indexes of temporal merge candidates for the current PU may be obtained using the whole or a part of reference picture indexes of the left PU (the block A), the upper PU (the block B), the top-right PU (the block C), the bottom-left PU (the block D), and the top-left PU (the block E) adjacent to the current PU. For example, the reference picture indexes of the left PU (the block A), the upper PU (the block B), and a corner block (one of the blocks C, D and E) adjacent to the current PU may be used. Additionally, the reference picture indexes of an odd number of (e.g. 3) effective PUs may be used from among the reference picture indexes of the left PU (the block A), upper PU (the block B), top-right PU (the block C), bottom-left PU (the block D), and top-left PU (the block E) adjacent to the current PU by scanning them in the order of A, B, C, D and E.
- A case where the reference picture indexes of left, upper, and corner PUs adjacent to a current PU are used to obtain the reference indexes of temporal merge candidates for the current PU will be described below.
- The reference picture index of the left PU (hereinafter, referred to as the left reference picture index), the reference picture index of the upper PU (hereinafter, referred to as the upper reference picture index), and the reference picture index of the corner PU (hereinafter, referred to as the corner reference picture index), adjacent to the current PU, are obtained. While only one of the corner PUs C, D and E is taken as a candidate, to which the present invention is not limited, it may be further contemplated in an alternative embodiment that the PUs C and D are set as candidates (thus four candidates) or the PUs C, D and E are all set as candidates (thus five candidates).
- While three or more effective reference picture indexes are used herein, all of the effective reference picture indexes or only a reference picture index at a predetermined position may be used. In the absence of any effective reference picture index, reference picture index 0 may be set as the reference picture index of a temporal merge candidate.
- If a plurality of reference picture indexes are used, a reference picture index that is most frequently used from among the reference picture indexes may be set as the reference picture index of a temporal merge candidate.
- When a plurality of reference picture indexes are most frequently used, a reference picture index having a minimum value among the plurality of reference picture indexes or the reference picture index of a left or upper block may be set as the reference picture index of a temporal merge candidate.
- Then, an operation for obtaining a motion vector of the temporal merge candidate will be described.
- A picture including the temporal merge candidate block (hereinafter, referred to as a temporal merge candidate picture) is determined. The temporal merge candidate picture may be set to a picture with reference picture index O. In this case, if the slice type is P, the first picture (i.e. a picture with index 0) in listO is set as a temporal merge candidate picture. If the slice type is B, the first picture of a reference picture list indicated by a flag that indicates a temporal merge candidate list in a slice header is set as a temporal merge candidate picture. For example, if the flag is 1, a temporal merge candidate picture may be selected from list0 and if the flag is 0, a temporal merge candidate picture may be selected from list1.
- Subsequently, a temporal merge candidate block is obtained from the temporal merge candidate picture. One of a plurality of blocks corresponding to the current PU within the temporal merge candidate picture may be determined as the temporal merge candidate block. In this case, the plurality of blocks corresponding to the current PU are prioritized and a first effective corresponding block is selected as the temporal merge candidate block according to the priority levels.
- For example, a bottom-left corner block adjacent to a block corresponding to the current PU within the temporal merge candidate picture or a bottom-left block included in the block corresponding to the current PU within the temporal merge candidate picture may be set as a first candidate block. In addition, a block including a top-left pixel or a block including a bottom-right pixel, at the center of the block corresponding to the current PU within the temporal merge candidate picture may be set as a second candidate block.
- If the first candidate block is effective, the first candidate block is set as the temporal merge candidate block. On the other hand, if not the first candidate block but the second candidate block is effective, the second candidate block is set as the temporal merge candidate block. Or only the second candidate block may be used according to the position of the current PU. The current PU may be located in a slice or an LCU.
- When the temporal merge candidate prediction block is determined, the motion vector of the temporal merge candidate prediction block is set as a temporal merge candidate motion vector.
- Meanwhile, the temporal merge candidate may be adaptively off according to the size of the current PU. For example, if the current PU is a 4×4 block, the temporal merge candidate may be off to reduce complexity.
- Then a merge candidate list is generated (S230).
- The merge candidate list is generated using the effective merge candidates in a predetermined order. If a plurality of merge candidates have the same motion information (i.e. the same motion vector and the same reference picture index), a lower-ranked merge candidate is deleted from the merge candidate list.
- For example, the predetermined order may be A, B, Col, C, and D in Embodiment 1 (spatial merge candidate configuration 1). Herein, Col represents a temporal merge candidate.
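- A sketch of this list construction follows (Python; candidates arrive in the predetermined order, e.g. A, B, Col, C, D for Embodiment 1, each given as a (motion_vector, reference_index) tuple or None when not effective):

    def build_merge_list(ordered_candidates):
        # Keep effective candidates in order; drop a lower-ranked candidate
        # whose motion vector and reference picture index duplicate an
        # earlier entry.
        merge_list, seen = [], set()
        for motion_info in ordered_candidates:
            if motion_info is not None and motion_info not in seen:
                merge_list.append(motion_info)
                seen.add(motion_info)
        return merge_list

    # build_merge_list([((1, 0), 0), ((1, 0), 0), None, ((2, 3), 1)])
    # -> [((1, 0), 0), ((2, 3), 1)]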
- In Embodiment 2 (spatial merge candidate configuration 2), the merge candidate list may be generated in the order of two effective PUs and Col, the two effective PUs being determined by scanning the blocks A, B, C, D and E in this order.
- In Embodiment 3 (spatial merge candidate configuration 3), the predetermined order may be A, B, Col, C, D. If at least one of the blocks A, B, C and D is not effective, the block E may be added. In this case, the block E may be added at the lowest rank. In addition, the merge candidate list may be generated in the order of (one of A and D), (one of C, B and E), and Col.
- In Embodiment 4 (spatial merge candidate configuration 4), the predetermined order may be A, B, Col, Corner, or A, B, Corner, Col.
- The number of merge candidates may be determined on a slice or LCU basis. In this case, the merge candidate list is generated in a predetermined order in the above embodiments.
- It is determined whether to generate merge candidates (S240). In the case where the number of merge candidates is set to a fixed value, if the number of effective merge candidates is smaller than the fixed value, merge candidates are generated (S250). The generated merge candidates are added to the merge candidate list. In this case, the generated merge candidates are added below the lowest ranked merge candidate in the merge candidate list. If a plurality of merge candidates are added, they are added in a predetermined order.
- The added merge candidate may be a candidate with motion vector 0 and reference picture index 0 (a first added merge candidate). In addition, the added merge candidate may be a candidate generated by combining the motion information of effective merge candidates (a second added merge candidate). For example, a candidate may be generated by combining the motion information (the reference picture index) of a temporal merge candidate with the motion information (motion vector) of an effective spatial merge candidate and then added to the merge candidate list. Merge candidates may be added in the order of the first and second added merge candidates or in the reverse order.
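- When the number of merge candidates is fixed, the padding step can be sketched as below (Python; fixed_count and the zero candidate follow the description above, and the combined candidate is shown only schematically, taking a spatial candidate's motion vector with the temporal candidate's reference picture index):

    def pad_merge_list(merge_list, fixed_count, temporal=None, spatial=None):
        # Optionally add one combined candidate before the zero candidate.
        if len(merge_list) < fixed_count and temporal and spatial:
            combined = (spatial[0], temporal[1])  # (motion vector, ref index)
            if combined not in merge_list:
                merge_list.append(combined)
        # Fill the remaining slots with the zero candidate:
        # motion vector (0, 0) and reference picture index 0.
        while len(merge_list) < fixed_count:
            merge_list.append(((0, 0), 0))
        return merge_list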
- On the contrary, if the number of merge candidates is variable and only effective merge candidates are used, the steps S240 and S250 may be omitted.
- A merge candidate is determined as a merge predictor of the current PU, from the generated merge candidate list (S260).
- Then the index of the merge predictor (i.e. the merge index) is encoded (S270). In case of a single merge candidate, the merge index is omitted. On the other hand, in case of two or more merge candidates, the merge index is encoded.
- The merge index may be encoded by fixed-length coding or CAVLC. If CAVLC is adopted, the merge index for codeword mapping may be adjusted according to a PU shape and a PU index.
- The number of merge candidates may be variable. In this case, a codeword corresponding to the merge index is selected using a table that is determined according to the number of effective merge candidates.
- The number of merge candidates may be fixed. In this case, a codeword corresponding to the merge index is selected using a single table corresponding to the number of merge candidates.
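- One simple codeword family whose length adapts to the candidate count is a truncated unary code, sketched below (Python; this binarization is an assumption for illustration, not the specific codeword table used by the invention):

    def merge_index_codeword(index, num_candidates):
        # Truncated unary: 'index' ones followed by a terminating zero,
        # where the last index drops the zero. With a single candidate
        # the merge index is omitted entirely.
        if num_candidates <= 1:
            return ''
        if index < num_candidates - 1:
            return '1' * index + '0'
        return '1' * index

    # num_candidates = 3: index 0 -> '0', index 1 -> '10', index 2 -> '11'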
- With reference to
FIG. 6 , an AMVP coding scheme will be described. - Referring to
FIG. 6 , a spatial AMVP candidate and a temporal AMVP candidate are derived (S310 and S320). - 1) Spatial AMVP Candidates
- (a) Spatial AMVP Candidate Configuration 1
- As illustrated in
FIG. 5 , spatial AMVP candidates may include one (a left candidate) of the left PU (the block A) and bottom-left PU (the block D) adjacent to the current PU and one (an upper candidate) of the right PU (the block B), top-right PU (the block C), and top-left PU (the block E) adjacent to the current PU. The motion vector of a first effective PU is selected as the left or upper candidate by scanning PUs in a predetermined order. The left PUs may be scanned in the order of A and D or in the order of D and A. The upper PUs may be scanned in the order of B, C and E or in the order of C, B and E. - (b) Spatial AMVP Candidate Configuration 2
- As illustrated in
FIG. 4 , the spatial AMVP candidates may be two effective PUs selected from the left PU (the block A), upper PU (the block B), top-right PU (the block C), and bottom-left PU (the block D) adjacent to the current PU by scanning them in the order of A, B, C and D. In this case, all of effective PUs may be candidates or two effective PUs obtained by scanning the blocks A, B, C and D in this order may be candidates. If there are a plurality of PUs to the left of the current PU, an effective uppermost PU or an effective PU having a largest area may be set as the left PU. Similarly, if there are a plurality of PUs above the current PU, an effective leftmost PU or an effective PU having a largest area may be set as the upper PU. - (c) Spatial AMVP Candidate Configuration 3
- As illustrated in
FIG. 5 , spatial AMVP candidates may include two effective PUs obtained by scanning the left PU (the block A), right PU (the block B), top-right PU (the block C), bottom-left PU (the block D), and top-left PU (the block E) adjacent to the current PU in this order. The left PU may be adjacent to the block E, not to the block D. Likewise, the upper PU may be adjacent to the block E, not to the block C. - (d) Spatial AMVP Candidate Configuration 4
- As illustrated in
FIG. 5 , spatial AMVP candidates may be four blocks selected from among the left PU (the block A), upper PU (the block B), top-right PU (the block C), bottom-left PU (the block D), and top-left PU (the block E) adjacent to the current PU. In this case, the block E may be available when one or more of blocks A to D are not effective. - (e) Spatial AMVP Candidate Configuration 5
- As illustrated in
FIG. 5 , spatial AMVP candidates may include the left PU (the block A), upper PU (the block B), and a corner PU (one of the blocks C, D and E) adjacent to the current PU. The corner PU is a first effective one of the top-right PU (the block C), bottom-left PU (the block D), and top-left PU (block E) of the current PU by scanning them in the order of C, D and E. - In the above embodiments, motion information of AMVP candidates above the current PU may be set differently according to the position of the current PU. For example, if the current PU is at the upper boundary of an LCU, the motion vector of an upper PU (the block B, C or E) adjacent to the current PU may be its own motion vector or the motion vector of an adjacent PU. The motion vector of the upper PU may be determined as its own motion vector or the motion vector of an adjacent PU according to the size and position of the current PU.
- 2) Temporal AMVP Candidate
- Because a temporal AMVP candidate needs only motion information, there is no need for obtaining a reference picture index, compared to a merge candidate. An operation for obtaining the motion vector of a temporal AMVP candidate will first be described.
- A picture including the temporal AMVP candidate block (hereinafter, referred to as a temporal AMVP candidate picture) is determined. The temporal AMVP candidate picture may be set to a picture with reference picture index 0. In this case, if the slice type is P, the first picture (i.e. a picture with index 0) in listO is set as a temporal AMVP candidate picture. If the slice type is B, the first picture of a reference picture list indicated by a flag that indicates a temporal AVMP candidate list in a slice header is set as a temporal AVMP candidate picture.
- Then, a temporal AMVP candidate block is obtained from the temporal AMVP candidate picture. This is performed in the same manner as the operation for obtaining a temporal merge candidate block and thus its description will not be provided herein.
- Meanwhile, the temporal AMVP candidate may be adaptively off according to the size of the current PU. For example, if the current PU is a 4×4 block, the temporal AMVP candidate may be off to reduce complexity.
- Then an AMVP candidate list is generated (S330). The AMVP candidate list is generated using effective AMVP candidates in a predetermined order. If a plurality of AMVP candidates have the same motion information (i.e. it is not necessary that the reference pictures are identical), lower-ranked AMVP candidates are deleted from the AMVP candidate list.
- In spatial AMVP candidate configuration 1, the predetermined order is one of A and D (the order of A and D or the order of D and A), one of B, C and E (the order of B, C and E or the order of C, B and E), and Col, or Col, one of A and D, and one of B, C and E. Herein, Col represents a temporal AMVP candidate.
- In spatial AMVP candidate configuration 2, the predetermined order is A, B, Col, C, D or C, D, Col, A, B.
- In spatial AMVP candidate configuration 3, the predetermined order is (two effective ones of A, B, C, D and E in this order) and Col or Col and (two effective ones of A, B, C, D and E in this order).
- In spatial AMVP candidate configuration 4, the predetermined order is A, B, Col, C, and D. If at least one of the blocks A, B, C and D is not effective, the block E may be added at the lowest rank.
- In spatial AMVP candidate configuration 5, the predetermined order is A, B, Col, and corner.
- It is determined whether to generate AMVP candidates (S340). In the case where the number of AMVP candidates is set to a fixed value, if the number of effective AMVP candidates is smaller than the fixed value, AMVP candidates are generated (S350). The fixed value may be 2 or 3. The generated AMVP candidates are added below the lowest-ranked AMVP candidate in the AMVP candidate list. The added AMVP candidate may be a candidate with motion vector 0.
- On the contrary, if the number of AMVP candidates is variable and only effective AMVP candidates are used, the steps S340 and S350 may be omitted.
- A motion vector predictor of the current PU is selected from the AMVP candidate list (S360). An AMVP index indicating the predictor is generated.
- Then, a differential motion vector between the motion vector of the current PU and the motion vector predictor is generated (S370).
- The reference picture index of the current PU, the differential motion vector, and the AMVP index are encoded (S380). In case of a single AMVP candidate, the AMVP index may be omitted.
- The AMVP index may be encoded by fixed-length coding or CAVLC. If CAVLC is adopted, the AMVP index for codeword mapping may be adjusted according to a PU shape and a PU index.
- The number of AMVP candidates may be variable. In this case, a codeword corresponding to the AMVP index is selected using a table determined according to the number of effective AMVP candidates.
- Meanwhile, the merge candidate block may be identical to the AMVP candidate block. For example, in the case where the AMVP candidate configuration is identical to the merge candidate configuration. Thus, encoder complexity can be reduced.
-
FIG. 7 is a block diagram of a video decoder according to an embodiment of the present invention. - Referring to
FIG. 7, the video decoder of the present invention includes an entropy decoder 210, an inverse quantizer/inverse transformer 220, an adder 270, a deblocking filter 250, a picture storage 260, an intra-predictor 230, a motion compensation predictor 240, and an intra/inter switch 280.
entropy decoder 210 separates an intra-prediction mode index, motion information, and a quantized coefficient sequence from a coded bit stream received from the video encoder by decoding the coded bit stream. Theentropy decoder 210 provides the decoded motion information to themotion compensation predictor 240, the intra-prediction mode index to the intra-predictor 230 and the inverse quantizer/inverse transformer 220, and the quantized coefficient sequence to the inverse quantizer/inverse transformer 220. - The inverse quantizer/
inverse transformer 220 converts the quantized coefficient sequence to a two-dimensional array of dequantized coefficients. For the conversion, one of a plurality of scan patterns is selected based on at least one of the prediction mode (i.e. one of intra-prediction and inter-prediction) and intra-prediction mode of the current block. The intra-prediction mode is received from the intra-predictor 230 or theentropy decoder 210. - The inverse quantizer/
inverse transformer 220 reconstructs quantized coefficients from the two-dimensional array of dequantized coefficients using a quantization matrix selected from among a plurality of quantization matrices. Even for blocks having the same size, the inverse quantizer/inverse transformer 220 selects a quantization matrix based on at least one of the prediction mode and intra-prediction mode of a current block. Then a residual block is reconstructed by inversely transforming the reconstructed quantized coefficients. - The
adder 270 adds the reconstructed residual block received from the inverse quantizer/inverse transformer 220 to a prediction block generated from the intra-predictor 230 or themotion compensation predictor 240, thereby reconstructing an image block. - The
deblocking filter 250 performs a deblocking filtering for the reconstructed image generated by theadder 270. Thus, deblocking artifact caused by image loss during quantization may be reduced. Thepicture storage 260 includes a frame memory that preserves a local decoded image that has been deblocking-filtered by thedeblocking filter 250. - The intra-predictor 230 determines the intra-prediction mode of the current block based on the intra-prediction mode index received from the
entropy decoder 210 and generates a prediction block according to the determined intra-prediction mode. - The
motion compensation predictor 240 generates a prediction block of the current block from a picture stored in thepicture storage 260 based on the motion vector information. If motion compensation with fractional-pel accuracy is applied, the prediction block is generated using a selected interpolation filter. - The intra/
inter switch 280 provides one of the prediction block generated from the intra-predictor 230 and the prediction block generated from themotion compensation predictor 240 to theadder 270. -
FIG. 8 is a flowchart illustrating an inter-prediction decoding operation according to an embodiment of the present invention. - Referring to
FIG. 8 , the video decoder may check whether a current PU to be decoded has been encoded in SKIP mode (S405). The check may be made based on skip_flag of a CU. - If the current PU has been encoded in SKIP mode, the motion information of the current PU is decoded according to a Smotion information decoding process corresponding to the SKIP mode (S410). The motion information decoding process corresponding to the SKIP mode is the same as a motion information decoding process corresponding to a merge mode.
- A corresponding block within a reference picture, indicated by the decoded motion information of the current PU is copied, thereby generating a reconstructed block of the current PU (S415).
- On the other hand, if the current PU has not been encoded in the SKIP mode, it is determined whether the motion information of the current PU has been encoded in merge mode (S420).
- If the motion information of the current PU has been encoded in the merge mode, the motion information of the current PU is decoded in the motion information decoding process corresponding to the merge mode (S425).
- A prediction block is generated using the decoded motion information of the current PU (S430).
- If the motion information of the current PU has been encoded in the merge mode, a residual block is decoded (S435).
- Then, a reconstructed block of the current PU is generated using the prediction block and the residual block (S440).
- On the other hand, if the motion information of the current PU has not been encoded in the merge mode, the motion information of the current PU is decoded in a motion information decoding process corresponding to an AMVP mode (S445).
- Then, a prediction block is generated using the decoded motion information of the current PU (S450) and the residual block is decoded (S455). A reconstructed block is generated using the prediction block and the residual block (S460).
- The motion information decoding process is different depending on the coding pattern of the motion information of the current PU. The coding pattern of the motion information of the current PU may be one of merge mode and AMVP mode. In SKIP mode, the same motion information decoding process as in the merge mode is performed.
- First, a description will be given of a motion information decoding operation, when the coding pattern of the motion information of a current PU is the merge mode.
-
FIG. 9 is a flowchart illustrating a motion vector decoding operation, when the number of merge candidates is variable. - Referring to
FIG. 9 , it is determined whether there is any merge codeword (S510). - In the absence of a merge codeword, an effective merge candidate is searched, determining that there is a single merge candidate for the current PU (S520). Merge candidate configurations and merge candidate search orders (i.e. listing orders) have been described before with reference to
FIG. 3 . - Upon a search of an effective merge candidate, the motion information of the current PU is generated using the motion information of the merge candidate (S530). That is, the reference picture index and motion vector of the merge candidate are set as the reference picture index and motion vector of the current PU.
- In the presence of a merge codeword, effective merge candidates are searched and a merge candidate list is constructed from the effective merge candidates (S540). Methods for configuring merge candidates and generating a merge candidate list have been described before with reference to
FIG. 3 . - A VLC table corresponding to the number of merge candidates is selected (S550).
- A merge index corresponding to the merge codeword is reconstructed (S560).
- A merge candidate corresponding to the merge index is selected from the merge candidate list and the motion information of the merge candidate is set as the motion information of the current PU (S570).
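By way of illustration, the variable-count flow above can be sketched as follows. Motion information is modeled as a (reference picture index, (mv_x, mv_y)) pair, and a truncated-unary code stands in for the VLC table selected in the step S550; the disclosure does not fix a particular VLC table.

```python
# Illustrative sketch of the FIG. 9 flow; the truncated-unary code is an
# assumption standing in for the VLC table chosen by candidate count (S550).
def tu_decode(codeword, max_index):
    # truncated unary: '0' -> 0, '10' -> 1, '110' -> 2, ..., all ones -> max_index
    ones = 0
    for bit in codeword:
        if bit != '1':
            break
        ones += 1
    return min(ones, max_index)

def decode_merge_motion_info(merge_codeword, candidates):
    if merge_codeword is None:        # S510/S520: no codeword, single candidate assumed
        return candidates[0]
    index = tu_decode(merge_codeword, len(candidates) - 1)   # S550/S560
    return candidates[index]          # S570: take the candidate's motion information

print(decode_merge_motion_info('10', [(0, (0, 0)), (0, (4, -2)), (1, (1, 3))]))
# -> (0, (4, -2))
```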
-
FIG. 10 is a flowchart illustrating a motion vector decoding operation, when the number of merge candidates is fixed. The number of merge candidates may be fixed on a picture or slice basis. - Referring to
FIG. 10 , effective merge candidates are searched (S610). Merge candidates include a spatial merge candidate and a temporal merge candidate. The positions of spatial merge candidates, the method for deriving the spatial merge candidates, the positions of temporal merge candidates and the method for deriving the temporal merge candidates have been described before with reference to FIG. 3 . If the current PU is smaller than a predetermined size, the temporal merge candidate may not be used. For example, the temporal merge candidate may be omitted for a 4×4 PU. - Upon a search of effective merge candidates, it is determined whether to generate a merge candidate (S620). If the number of effective merge candidates is smaller than a predetermined value, a merge candidate is generated (S630). The merge candidate may be generated by combining the motion information of effective merge candidates. A merge candidate with motion vector 0 and reference picture index 0 may be added. Merge candidates are added in a predetermined order.
- A merge list is made using the merge candidates (S640). This step may be performed in combination with the steps S620 and S630. The merge candidate configurations and the merge candidate search orders (i.e. listing orders) have been described before with reference to
FIG. 3 . A merge index corresponding to a merge codeword in a received bit stream is reconstructed (S650). Since the number of merge candidates is fixed, the merge index corresponding to the merge codeword may be obtained from one decoding table corresponding to the number of merge candidates. However, a different decoding table may be used depending on whether a temporal merge candidate is used. - A candidate corresponding to the merge index is searched from the merge list (S660). The searched merge candidate is determined to be a merge predictor.
- Once the merge predictor is determined, the motion information of the current PU is generated using the motion information of the merge predictor (S670). Specifically, the motion information of the merge predictor, i.e. the reference picture index and motion vector of the merge predictor are determined to be the reference picture index and motion vector of the current PU.
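The fixed-count list construction above can be sketched as follows; this is illustrative only. The fixed count of five is an assumed value, and only the zero-candidate padding is shown (the combined candidates mentioned in the step S630 are omitted).

```python
# Hypothetical sketch of fixed-count merge list construction (S610-S640);
# candidates are (ref_idx, (mv_x, mv_y)) pairs and None marks an unavailable one.
def build_merge_list(spatial, temporal, fixed_count=5):
    candidates = []
    for cand in spatial + temporal:                  # S610: effective candidates
        if cand is not None and cand not in candidates:
            candidates.append(cand)                  # same-motion duplicates pruned
    while len(candidates) < fixed_count:             # S620/S630: too few candidates,
        candidates.append((0, (0, 0)))               # pad with a zero candidate
    return candidates[:fixed_count]                  # S640: final merge list

merge_list = build_merge_list(spatial=[(0, (4, -2)), None, (0, (4, -2))],
                              temporal=[(1, (1, 3))])
print(merge_list)   # two real candidates followed by three zero-motion pads
```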
- Now a description will be given of a motion information decoding operation, when the motion information coding pattern of a current PU is AMVP.
-
FIG. 11 is a flowchart illustrating a motion vector decoding operation, when the number of AMVP candidates is variable. - Referring to
FIG. 11 , the reference picture index and differential motion vector of a current PU are parsed (S710). - It is determined whether there exists an AMVP codeword (S720).
- In the absence of an AMVP codeword, a search is made for an effective AMVP candidate, it being determined that the number of AMVP candidates for the current PU is 1 (S730). The AMVP candidate configurations and the AMVP candidate search orders (i.e. listing orders) have been described before in detail with reference to
FIG. 6 . - Upon a search of an effective AMVP candidate, the motion vector of the AMVP candidate is set as a motion vector predictor of the current PU (S740).
- In the presence of an AMVP codeword, an AMVP candidate list is generated by searching effective AMVP candidates (S750). The AMVP candidate configurations and the AMVP candidate search orders (i.e. listing orders) have been described before in detail with reference to
FIG. 6 . - A VLC table corresponding to the number of AMVP candidates is selected (S760).
- An AMVP index corresponding to the AMVP codeword is reconstructed (S770).
- An AMVP candidate corresponding to the AMVP index is selected from the AMVP candidate list and the motion vector of the AMVP candidate is set as a motion vector predictor of the current PU (S780).
- The sum of the motion vector predictor obtained in the step S740 or S780 and the differential motion vector obtained in the step S710 is set as a final motion vector of the current block (S790).
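In contrast to merge mode, only the motion vector predictor is taken from the selected candidate, and the parsed differential motion vector is added to it. A sketch of this flow, reusing the illustrative tu_decode helper from the FIG. 9 sketch:

```python
# Illustrative FIG. 11 flow: select a motion vector predictor, then add the
# parsed differential motion vector; tu_decode is the earlier assumed VLC stand-in.
def decode_amvp_motion_vector(amvp_codeword, candidate_mvs, mvd):
    if amvp_codeword is None:                  # S720/S730: a single AMVP candidate
        mvp = candidate_mvs[0]                 # S740
    else:
        index = tu_decode(amvp_codeword, len(candidate_mvs) - 1)  # S760/S770
        mvp = candidate_mvs[index]             # S780
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])  # S790: final motion vector

print(decode_amvp_motion_vector('0', [(4, -2), (1, 3)], mvd=(-1, 2)))  # -> (3, 0)
```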
-
FIG. 12 is a flowchart illustrating a motion vector decoding operation, when the number of AMVP candidates is fixed. - Referring to
FIG. 12 , the reference picture index and differential motion vector of a current PU are parsed (S810). - Effective AMVP candidates are searched (S820). AMVP candidates include a spatial AMVP candidate and a temporal AMVP candidate. The positions of spatial AMVP candidates, the method for deriving the spatial AMVP candidates, the positions of temporal AMVP candidates, and the method for deriving the temporal AMVP candidates have been described before with reference to
FIG. 6 . If the current PU is smaller than a predetermined size, the temporal AMVP candidate may not be used. For example, the temporal AMVP candidate may be omitted for a 4×4 PU. - It is determined, based on the number of effective AMVP candidates, whether to generate an AMVP candidate (S830). If the number of effective AMVP candidates is smaller than a predetermined value, an AMVP candidate is generated (S840). The predetermined value may be 2 or 3.
- For example, in the case where a spatial upper AMVP candidate exists but a spatial left AMVP candidate does not, if an effective PU other than the spatial upper AMVP candidate exists, the motion vector of that effective PU may be added. Conversely, in the case where a spatial left AMVP candidate exists but a spatial upper AMVP candidate does not, if an effective PU other than the spatial left AMVP candidate exists, the motion vector of that effective PU may be added. Alternatively, an AMVP candidate with motion vector 0 may be added.
- An AMVP candidate list is generated using the effective AMVP candidates and/or the generated AMVP candidate (S850). The step S850 may be performed after the step S820. In this case, the step S850 follows the step S840. How to generate a candidate list has been described before with reference to
FIG. 6 . - An AMVP index corresponding to an AMVP codeword is recovered (S860). The AMVP index may be encoded by fixed length coding.
- Then, an AMVP candidate corresponding to the AMVP index is searched from the AMVP candidate list (S870). The searched AMVP candidate is determined to be an AMVP predictor.
- The motion vector of the AMVP predictor is determined to be the motion vector predictor of the current PU (S880).
- The sum of the differential motion vector obtained in the step S810 and the motion vector predictor obtained in the step S880 is set as a final motion vector of the current PU, and the reference picture index obtained in the step S810 is set as the reference picture index of the current PU (S890).
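Putting the fixed-count AMVP steps together: parse (S810), build and pad the candidate list (S820 to S850), decode a fixed-length index (S860), and reconstruct the motion information (S870 to S890). A self-contained sketch under the assumption of a two-candidate list, as in claim 8 below:

```python
# Hypothetical end-to-end sketch of the FIG. 12 flow with a fixed list of two
# candidates; AMVP candidates are (mv_x, mv_y) motion vectors, None if unavailable.
def decode_amvp_fixed(index_bits, spatial, temporal, mvd, ref_idx, list_size=2):
    candidates = [c for c in spatial + temporal if c is not None]   # S820
    while len(candidates) < list_size:         # S830/S840: pad with zero vectors
        candidates.append((0, 0))
    amvp_index = int(index_bits, 2)            # S860: fixed-length coded index
    mvp = candidates[amvp_index]               # S870/S880: AMVP predictor
    mv = (mvp[0] + mvd[0], mvp[1] + mvd[1])    # S890: predictor plus differential MV
    return mv, ref_idx                         # motion information of the current PU

print(decode_amvp_fixed('1', spatial=[(4, -2)], temporal=[None],
                        mvd=(2, 2), ref_idx=0))   # -> ((2, 2), 0)
```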
- It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.
Claims (8)
1. A method for generating a prediction block in Advanced Motion Vector Prediction (AMVP) mode, the method comprising:
reconstructing a reference picture index and a differential motion vector of a current Prediction Unit (PU);
searching an effective spatial AMVP candidate for the current PU;
searching an effective temporal AMVP candidate for the current PU;
generating an AMVP candidate list using the effective spatial and temporal AMVP candidates;
adding a motion vector having a predetermined value as a candidate to the AMVP candidate list, when the number of the effective AMVP candidates is smaller than a predetermined number;
determining a motion vector corresponding to an AMVP index of the current PU from among motion vectors included in the AMVP candidate list to be a motion vector predictor of the current PU;
reconstructing a motion vector of the current PU using the differential motion vector and the motion vector predictor; and
generating a prediction block corresponding to a position indicated by the reconstructed motion vector within a reference picture indicated by the reference picture index.
2. The method according to claim 1 , wherein the generating of the AMVP candidate list comprises deleting a lower-ranked AMVP candidate in the AMVP candidate list when there are two or more AMVP candidates having the same motion vector.
3. The method according to claim 1 , wherein the search of an effective temporal AMVP candidate comprises:
determining a temporal AMVP candidate picture; and
determining a temporal AMVP candidate block in the temporal AMVP candidate picture,
wherein the temporal AMVP candidate picture is determined based on a slice type.
4. The method according to claim 3 , wherein the temporal AMVP candidate picture is a picture with reference picture index 0.
5. The method according to claim 3 , wherein the temporal AMVP candidate block is a first effective block resulting from scanning the first and second candidate blocks sequentially.
6. The method according to claim 1 , wherein a motion vector of the spatial AMVP candidate for the current PU is determined based on a size and position of the current PU.
7. The method according to claim 6 , wherein when the current PU is positioned at an upper boundary of a Largest Coding Unit (LCU), a motion vector of a spatial AMVP candidate above the current PU is a motion vector of a PU corresponding to the AMVP candidate, or a motion vector of a left or right PU adjacent to the AMVP candidate.
8. The method according to claim 1 , wherein the predetermined number is 2.
Priority Applications (18)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/083,232 US9948945B2 (en) | 2011-08-29 | 2013-11-18 | Method for generating prediction block in AMVP mode |
US14/586,471 US9800887B2 (en) | 2011-08-29 | 2014-12-30 | Method for generating prediction block in AMVP mode |
US14/586,406 US20150110197A1 (en) | 2011-08-29 | 2014-12-30 | Method for generating prediction block in amvp mode |
US15/335,015 US20170048539A1 (en) | 2011-08-29 | 2016-10-26 | Method for generating prediction block in amvp mode |
US15/708,740 US10123033B2 (en) | 2011-08-29 | 2017-09-19 | Method for generating prediction block in AMVP mode |
US15/708,526 US10123032B2 (en) | 2011-08-29 | 2017-09-19 | Method for generating prediction block in AMVP mode |
US15/879,249 US10123035B2 (en) | 2011-08-29 | 2018-01-24 | Method for generating prediction block in AMVP mode |
US15/879,236 US10123034B2 (en) | 2011-08-29 | 2018-01-24 | Method for generating prediction block in AMVP mode |
US15/879,230 US10148976B2 (en) | 2011-08-29 | 2018-01-24 | Method for generating prediction block in AMVP mode |
US16/165,185 US10798401B2 (en) | 2011-08-29 | 2018-10-19 | Method for generating prediction block in AMVP mode |
US17/061,676 US11350121B2 (en) | 2011-08-29 | 2020-10-02 | Method for generating prediction block in AMVP mode |
US17/526,747 US11689734B2 (en) | 2011-08-29 | 2021-11-15 | Method for generating prediction block in AMVP mode |
US17/672,272 US11778225B2 (en) | 2011-08-29 | 2022-02-15 | Method for generating prediction block in AMVP mode |
US18/314,888 US12034959B2 (en) | 2011-08-29 | 2023-05-10 | Method for generating prediction block in AMVP mode |
US18/314,890 US12022103B2 (en) | 2011-08-29 | 2023-05-10 | Method for generating prediction block in AMVP mode |
US18/314,898 US12028544B2 (en) | 2011-08-29 | 2023-05-10 | Method for generating prediction block in AMVP mode |
US18/314,903 US12034960B2 (en) | 2011-08-29 | 2023-05-10 | Method for generating prediction block in AMVP mode |
US18/737,070 US20240323426A1 (en) | 2011-08-29 | 2024-06-07 | Method for generating prediction block in amvp mode |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2011-0086518 | 2011-08-29 | ||
KR20110086518 | 2011-08-29 | ||
PCT/KR2012/000522 WO2013032073A1 (en) | 2011-08-29 | 2012-01-20 | Method for generating prediction block in amvp mode |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2012/000522 Continuation WO2013032073A1 (en) | 2011-08-29 | 2012-01-20 | Method for generating prediction block in amvp mode |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/083,232 Continuation US9948945B2 (en) | 2011-08-29 | 2013-11-18 | Method for generating prediction block in AMVP mode |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130128982A1 true US20130128982A1 (en) | 2013-05-23 |
Family
ID=47756521
Family Applications (19)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/742,058 Abandoned US20130128982A1 (en) | 2011-08-29 | 2013-01-15 | Method for generating prediction block in amvp mode |
US14/083,232 Active US9948945B2 (en) | 2011-08-29 | 2013-11-18 | Method for generating prediction block in AMVP mode |
US14/586,406 Abandoned US20150110197A1 (en) | 2011-08-29 | 2014-12-30 | Method for generating prediction block in amvp mode |
US14/586,471 Active US9800887B2 (en) | 2011-08-29 | 2014-12-30 | Method for generating prediction block in AMVP mode |
US15/335,015 Abandoned US20170048539A1 (en) | 2011-08-29 | 2016-10-26 | Method for generating prediction block in amvp mode |
US15/708,526 Active US10123032B2 (en) | 2011-08-29 | 2017-09-19 | Method for generating prediction block in AMVP mode |
US15/708,740 Active US10123033B2 (en) | 2011-08-29 | 2017-09-19 | Method for generating prediction block in AMVP mode |
US15/879,236 Active US10123034B2 (en) | 2011-08-29 | 2018-01-24 | Method for generating prediction block in AMVP mode |
US15/879,230 Active US10148976B2 (en) | 2011-08-29 | 2018-01-24 | Method for generating prediction block in AMVP mode |
US15/879,249 Active US10123035B2 (en) | 2011-08-29 | 2018-01-24 | Method for generating prediction block in AMVP mode |
US16/165,185 Active US10798401B2 (en) | 2011-08-29 | 2018-10-19 | Method for generating prediction block in AMVP mode |
US17/061,676 Active US11350121B2 (en) | 2011-08-29 | 2020-10-02 | Method for generating prediction block in AMVP mode |
US17/526,747 Active US11689734B2 (en) | 2011-08-29 | 2021-11-15 | Method for generating prediction block in AMVP mode |
US17/672,272 Active US11778225B2 (en) | 2011-08-29 | 2022-02-15 | Method for generating prediction block in AMVP mode |
US18/314,890 Active US12022103B2 (en) | 2011-08-29 | 2023-05-10 | Method for generating prediction block in AMVP mode |
US18/314,898 Active US12028544B2 (en) | 2011-08-29 | 2023-05-10 | Method for generating prediction block in AMVP mode |
US18/314,903 Active US12034960B2 (en) | 2011-08-29 | 2023-05-10 | Method for generating prediction block in AMVP mode |
US18/314,888 Active US12034959B2 (en) | 2011-08-29 | 2023-05-10 | Method for generating prediction block in AMVP mode |
US18/737,070 Pending US20240323426A1 (en) | 2011-08-29 | 2024-06-07 | Method for generating prediction block in amvp mode |
Family Applications After (18)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/083,232 Active US9948945B2 (en) | 2011-08-29 | 2013-11-18 | Method for generating prediction block in AMVP mode |
US14/586,406 Abandoned US20150110197A1 (en) | 2011-08-29 | 2014-12-30 | Method for generating prediction block in amvp mode |
US14/586,471 Active US9800887B2 (en) | 2011-08-29 | 2014-12-30 | Method for generating prediction block in AMVP mode |
US15/335,015 Abandoned US20170048539A1 (en) | 2011-08-29 | 2016-10-26 | Method for generating prediction block in amvp mode |
US15/708,526 Active US10123032B2 (en) | 2011-08-29 | 2017-09-19 | Method for generating prediction block in AMVP mode |
US15/708,740 Active US10123033B2 (en) | 2011-08-29 | 2017-09-19 | Method for generating prediction block in AMVP mode |
US15/879,236 Active US10123034B2 (en) | 2011-08-29 | 2018-01-24 | Method for generating prediction block in AMVP mode |
US15/879,230 Active US10148976B2 (en) | 2011-08-29 | 2018-01-24 | Method for generating prediction block in AMVP mode |
US15/879,249 Active US10123035B2 (en) | 2011-08-29 | 2018-01-24 | Method for generating prediction block in AMVP mode |
US16/165,185 Active US10798401B2 (en) | 2011-08-29 | 2018-10-19 | Method for generating prediction block in AMVP mode |
US17/061,676 Active US11350121B2 (en) | 2011-08-29 | 2020-10-02 | Method for generating prediction block in AMVP mode |
US17/526,747 Active US11689734B2 (en) | 2011-08-29 | 2021-11-15 | Method for generating prediction block in AMVP mode |
US17/672,272 Active US11778225B2 (en) | 2011-08-29 | 2022-02-15 | Method for generating prediction block in AMVP mode |
US18/314,890 Active US12022103B2 (en) | 2011-08-29 | 2023-05-10 | Method for generating prediction block in AMVP mode |
US18/314,898 Active US12028544B2 (en) | 2011-08-29 | 2023-05-10 | Method for generating prediction block in AMVP mode |
US18/314,903 Active US12034960B2 (en) | 2011-08-29 | 2023-05-10 | Method for generating prediction block in AMVP mode |
US18/314,888 Active US12034959B2 (en) | 2011-08-29 | 2023-05-10 | Method for generating prediction block in AMVP mode |
US18/737,070 Pending US20240323426A1 (en) | 2011-08-29 | 2024-06-07 | Method for generating prediction block in amvp mode |
Country Status (6)
Country | Link |
---|---|
US (19) | US20130128982A1 (en) |
KR (4) | KR20140057683A (en) |
CN (7) | CN107277547B (en) |
BR (1) | BR112014004914B1 (en) |
MX (4) | MX351933B (en) |
WO (1) | WO2013032073A1 (en) |
Cited By (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130022122A1 (en) * | 2010-08-17 | 2013-01-24 | Soo Mi Oh | Method of encoding moving picture in inter prediction mode |
US20130077884A1 (en) * | 2010-06-03 | 2013-03-28 | Sharp Kabushiki Kaisha | Filter device, image decoding device, image encoding device, and filter parameter data structure |
US20160191943A1 (en) * | 2010-12-14 | 2016-06-30 | M & K Holdings Inc | Apparatus for decoding a moving picture |
US9591325B2 (en) | 2015-01-27 | 2017-03-07 | Microsoft Technology Licensing, Llc | Special case handling for merged chroma blocks in intra block copy prediction mode |
US10148977B2 (en) * | 2015-06-16 | 2018-12-04 | Futurewei Technologies, Inc. | Advanced coding techniques for high efficiency video coding (HEVC) screen content coding (SCC) extensions |
US20190174136A1 (en) * | 2016-08-11 | 2019-06-06 | Electronics And Telecommunications Research Institute | Method and apparatus for image encoding/decoding |
US10368091B2 (en) | 2014-03-04 | 2019-07-30 | Microsoft Technology Licensing, Llc | Block flipping and skip mode in intra block copy prediction |
US10390034B2 (en) | 2014-01-03 | 2019-08-20 | Microsoft Technology Licensing, Llc | Innovations in block vector prediction and estimation of reconstructed sample values within an overlap area |
US10469863B2 (en) | 2014-01-03 | 2019-11-05 | Microsoft Technology Licensing, Llc | Block vector prediction in video and image coding/decoding |
US10506254B2 (en) | 2013-10-14 | 2019-12-10 | Microsoft Technology Licensing, Llc | Features of base color index map mode for video and image coding and decoding |
WO2020003274A1 (en) * | 2018-06-29 | 2020-01-02 | Beijing Bytedance Network Technology Co., Ltd. | Checking order of motion candidates in lut |
US10542274B2 (en) | 2014-02-21 | 2020-01-21 | Microsoft Technology Licensing, Llc | Dictionary encoding and decoding of screen content |
US10582213B2 (en) | 2013-10-14 | 2020-03-03 | Microsoft Technology Licensing, Llc | Features of intra block copy prediction mode for video and image coding and decoding |
CN110876058A (en) * | 2018-08-30 | 2020-03-10 | 华为技术有限公司 | Historical candidate list updating method and device |
US10630974B2 (en) * | 2017-05-30 | 2020-04-21 | Google Llc | Coding of intra-prediction modes |
US10659783B2 (en) | 2015-06-09 | 2020-05-19 | Microsoft Technology Licensing, Llc | Robust encoding/decoding of escape-coded pixels in palette mode |
US10778997B2 (en) | 2018-06-29 | 2020-09-15 | Beijing Bytedance Network Technology Co., Ltd. | Resetting of look up table per slice/tile/LCU row |
US10785486B2 (en) | 2014-06-19 | 2020-09-22 | Microsoft Technology Licensing, Llc | Unified intra block copy and inter prediction modes |
US10812817B2 (en) | 2014-09-30 | 2020-10-20 | Microsoft Technology Licensing, Llc | Rules for intra-picture prediction modes when wavefront parallel processing is enabled |
US10873756B2 (en) | 2018-06-29 | 2020-12-22 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between LUT and AMVP |
US20200413044A1 (en) | 2018-09-12 | 2020-12-31 | Beijing Bytedance Network Technology Co., Ltd. | Conditions for starting checking hmvp candidates depend on total number minus k |
US10986349B2 (en) | 2017-12-29 | 2021-04-20 | Microsoft Technology Licensing, Llc | Constraints on locations of reference blocks for intra block copy prediction |
US11109036B2 (en) | 2013-10-14 | 2021-08-31 | Microsoft Technology Licensing, Llc | Encoder-side options for intra block copy prediction mode for video and image coding |
US11134244B2 (en) | 2018-07-02 | 2021-09-28 | Beijing Bytedance Network Technology Co., Ltd. | Order of rounding and pruning in LAMVR |
US11134267B2 (en) | 2018-06-29 | 2021-09-28 | Beijing Bytedance Network Technology Co., Ltd. | Update of look up table: FIFO, constrained FIFO |
US11140383B2 (en) | 2019-01-13 | 2021-10-05 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between look up table and shared merge list |
US11146785B2 (en) | 2018-06-29 | 2021-10-12 | Beijing Bytedance Network Technology Co., Ltd. | Selection of coded motion information for LUT updating |
US11159807B2 (en) | 2018-06-29 | 2021-10-26 | Beijing Bytedance Network Technology Co., Ltd. | Number of motion candidates in a look up table to be checked according to mode |
US11159817B2 (en) | 2018-06-29 | 2021-10-26 | Beijing Bytedance Network Technology Co., Ltd. | Conditions for updating LUTS |
US20210352316A1 (en) * | 2011-11-07 | 2021-11-11 | Infobridge Pte. Ltd. | Apparatus for decoding video data |
US11284103B2 (en) | 2014-01-17 | 2022-03-22 | Microsoft Technology Licensing, Llc | Intra block copy prediction with asymmetric partitions and encoder-side search patterns, search ranges and approaches to partitioning |
RU2774105C2 (en) * | 2018-06-29 | 2022-06-16 | Бейджин Байтдэнс Нетворк Текнолоджи Ко., Лтд. | Partial/full pruning when adding an hmvp candidate for merging/amvp |
US11438576B2 (en) * | 2019-03-08 | 2022-09-06 | Tencent America LLC | Merge list construction in triangular prediction |
US11528500B2 (en) | 2018-06-29 | 2022-12-13 | Beijing Bytedance Network Technology Co., Ltd. | Partial/full pruning when adding a HMVP candidate to merge/AMVP |
US11589071B2 (en) | 2019-01-10 | 2023-02-21 | Beijing Bytedance Network Technology Co., Ltd. | Invoke of LUT updating |
US11641483B2 (en) | 2019-03-22 | 2023-05-02 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between merge list construction and other tools |
US11895318B2 (en) | 2018-06-29 | 2024-02-06 | Beijing Bytedance Network Technology Co., Ltd | Concept of using one or multiple look up tables to store motion information of previously coded in order and use them to code following blocks |
US11956464B2 (en) | 2019-01-16 | 2024-04-09 | Beijing Bytedance Network Technology Co., Ltd | Inserting order of motion candidates in LUT |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105611298B (en) | 2010-12-13 | 2019-06-18 | 韩国电子通信研究院 | The inter-frame prediction method executed using decoding device |
KR20120140181A (en) * | 2011-06-20 | 2012-12-28 | 한국전자통신연구원 | Method and apparatus for encoding and decoding using filtering for prediction block boundary |
CN110446038B (en) * | 2011-11-08 | 2022-01-04 | 韩国电子通信研究院 | Method and apparatus for sharing a candidate list |
KR20160072105A (en) * | 2013-10-18 | 2016-06-22 | 엘지전자 주식회사 | Video decoding apparatus and method for decoding multi-view video |
JP6374605B2 (en) * | 2014-10-07 | 2018-08-15 | サムスン エレクトロニクス カンパニー リミテッド | Method, apparatus and recording medium for decoding video |
US20170085886A1 (en) * | 2015-09-18 | 2017-03-23 | Qualcomm Incorporated | Variable partition size for block prediction mode for display stream compression (dsc) |
EP3335215B1 (en) * | 2016-03-21 | 2020-05-13 | Huawei Technologies Co., Ltd. | Adaptive quantization of weighted matrix coefficients |
KR102275420B1 (en) * | 2016-07-12 | 2021-07-09 | 한국전자통신연구원 | A method for encoding/decoding a video and a readable medium therefor |
US11095892B2 (en) * | 2016-09-20 | 2021-08-17 | Kt Corporation | Method and apparatus for processing video signal |
CN110140355B (en) * | 2016-12-27 | 2022-03-08 | 联发科技股份有限公司 | Method and device for fine adjustment of bidirectional template motion vector for video coding and decoding |
KR20180111378A (en) * | 2017-03-31 | 2018-10-11 | 주식회사 칩스앤미디어 | A method of video processing providing independent properties between coding tree units and coding units, a method and appratus for decoding and encoding video using the processing. |
KR20210115052A (en) | 2017-07-07 | 2021-09-24 | 삼성전자주식회사 | Apparatus and method for encoding motion vector determined using adaptive motion vector resolution, and apparatus and method for decoding motion vector |
US11432003B2 (en) | 2017-09-28 | 2022-08-30 | Samsung Electronics Co., Ltd. | Encoding method and apparatus therefor, and decoding method and apparatus therefor |
CN107613305B (en) * | 2017-10-12 | 2020-04-07 | 杭州当虹科技股份有限公司 | P, B frame rapid motion estimation method in HEVC |
GB2588003B (en) | 2018-06-05 | 2023-04-19 | Beijing Bytedance Network Tech Co Ltd | Interaction between pairwise average merging candidates and IBC |
WO2019244056A1 (en) * | 2018-06-19 | 2019-12-26 | Beijing Bytedance Network Technology Co., Ltd. | Multi-candidates of different precisions |
WO2019244117A1 (en) | 2018-06-21 | 2019-12-26 | Beijing Bytedance Network Technology Co., Ltd. | Unified constrains for the merge affine mode and the non-merge affine mode |
EP3788782A1 (en) | 2018-06-21 | 2021-03-10 | Beijing Bytedance Network Technology Co. Ltd. | Sub-block mv inheritance between color components |
WO2020003273A1 (en) * | 2018-06-29 | 2020-01-02 | Beijing Bytedance Network Technology Co., Ltd. | Extended merge mode |
TWI731358B (en) | 2018-06-29 | 2021-06-21 | 大陸商北京字節跳動網絡技術有限公司 | Improved tmvp derivation |
CN116033150A (en) | 2018-09-08 | 2023-04-28 | 北京字节跳动网络技术有限公司 | Affine pattern calculation for different video block sizes |
TWI827681B (en) | 2018-09-19 | 2024-01-01 | 大陸商北京字節跳動網絡技術有限公司 | Syntax reuse for affine mode with adaptive motion vector resolution |
CN117768651A (en) | 2018-09-24 | 2024-03-26 | 北京字节跳动网络技术有限公司 | Method, apparatus, medium, and bit stream storage method for processing video data |
WO2020071846A1 (en) * | 2018-10-06 | 2020-04-09 | 엘지전자 주식회사 | Method and apparatus for processing video signal by using intra-prediction |
CN116886926A (en) * | 2018-11-10 | 2023-10-13 | 北京字节跳动网络技术有限公司 | Rounding in paired average candidate calculation |
EP3905684B1 (en) * | 2018-12-29 | 2024-09-04 | SZ DJI Technology Co., Ltd. | Video processing method and device |
CN112042191B (en) * | 2019-01-01 | 2024-03-19 | Lg电子株式会社 | Method and apparatus for predictive processing of video signals based on history-based motion vectors |
WO2020156516A1 (en) | 2019-01-31 | 2020-08-06 | Beijing Bytedance Network Technology Co., Ltd. | Context for coding affine mode adaptive motion vector resolution |
CN113366839B (en) * | 2019-01-31 | 2024-01-12 | 北京字节跳动网络技术有限公司 | Refinement quantization step in video codec |
WO2020156517A1 (en) | 2019-01-31 | 2020-08-06 | Beijing Bytedance Network Technology Co., Ltd. | Fast algorithms for symmetric motion vector difference coding mode |
CN110809161B (en) * | 2019-03-11 | 2020-12-29 | 杭州海康威视数字技术股份有限公司 | Motion information candidate list construction method and device |
US11611759B2 (en) * | 2019-05-24 | 2023-03-21 | Qualcomm Incorporated | Merge mode coding for video coding |
US11297320B2 (en) * | 2020-01-10 | 2022-04-05 | Mediatek Inc. | Signaling quantization related parameters |
US11968356B2 (en) * | 2022-03-16 | 2024-04-23 | Qualcomm Incorporated | Decoder-side motion vector refinement (DMVR) inter prediction using shared interpolation filters and reference pixels |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120263231A1 (en) * | 2011-04-18 | 2012-10-18 | Minhua Zhou | Temporal Motion Data Candidate Derivation in Video Coding |
US20120320984A1 (en) * | 2011-06-14 | 2012-12-20 | Minhua Zhou | Inter-Prediction Candidate Index Coding Independent of Inter-Prediction Candidate List Construction in Video Coding |
US20120320969A1 (en) * | 2011-06-20 | 2012-12-20 | Qualcomm Incorporated | Unified merge mode and adaptive motion vector prediction mode candidates selection |
US20130163668A1 (en) * | 2011-12-22 | 2013-06-27 | Qualcomm Incorporated | Performing motion vector prediction for video coding |
Family Cites Families (52)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5778143A (en) * | 1993-01-13 | 1998-07-07 | Hitachi America, Ltd. | Method and apparatus for the selection of data for use in VTR trick playback operation in a system using progressive picture refresh |
DK1175670T4 (en) * | 1999-04-16 | 2007-11-19 | Dolby Lab Licensing Corp | Audio coding using gain adaptive quantification and symbols of unequal length |
JP2002112268A (en) * | 2000-09-29 | 2002-04-12 | Toshiba Corp | Compressed image data decoding apparatus |
KR20020088086A (en) * | 2001-01-23 | 2002-11-25 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | Watermarking a compressed information signal |
US7920624B2 (en) * | 2002-04-01 | 2011-04-05 | Broadcom Corporation | Inverse quantizer supporting multiple decoding processes |
JP4130783B2 (en) * | 2002-04-23 | 2008-08-06 | 松下電器産業株式会社 | Motion vector encoding method and motion vector decoding method |
JP2004023458A (en) * | 2002-06-17 | 2004-01-22 | Toshiba Corp | Moving picture encoding/decoding method and apparatus |
CN1225127C (en) * | 2003-09-12 | 2005-10-26 | 中国科学院计算技术研究所 | A coding/decoding end bothway prediction method for video coding |
US7561620B2 (en) * | 2004-08-03 | 2009-07-14 | Microsoft Corporation | System and process for compressing and decompressing multiple, layered, video streams employing spatial and temporal encoding |
US7720154B2 (en) * | 2004-11-12 | 2010-05-18 | Industrial Technology Research Institute | System and method for fast variable-size motion estimation |
KR20060105810A (en) | 2005-04-04 | 2006-10-11 | 엘지전자 주식회사 | Method for call forwarding using radio communication and system thereof |
KR100733966B1 (en) | 2005-04-13 | 2007-06-29 | 한국전자통신연구원 | Apparatus and Method for Predicting Motion Vector |
US8422546B2 (en) * | 2005-05-25 | 2013-04-16 | Microsoft Corporation | Adaptive video encoding using a perceptual model |
KR100727990B1 (en) * | 2005-10-01 | 2007-06-13 | 삼성전자주식회사 | Intra prediction encoding method and encoder thereof |
EP2501178B1 (en) * | 2006-02-07 | 2013-10-30 | Nec Corporation | Mobile communication system, wireless base station controllers and relocation method |
US8249182B2 (en) * | 2007-03-28 | 2012-08-21 | Panasonic Corporation | Decoding circuit, decoding method, encoding circuit, and encoding method |
US8488668B2 (en) * | 2007-06-15 | 2013-07-16 | Qualcomm Incorporated | Adaptive coefficient scanning for video coding |
EP2227020B1 (en) * | 2007-09-28 | 2014-08-13 | Dolby Laboratories Licensing Corporation | Video compression and transmission techniques |
KR100928325B1 (en) * | 2007-10-15 | 2009-11-25 | 세종대학교산학협력단 | Image encoding and decoding method and apparatus |
CN101472174A (en) * | 2007-12-29 | 2009-07-01 | 智多微电子(上海)有限公司 | Method and device for recuperating original image data in video decoder |
CN101610413B (en) * | 2009-07-29 | 2011-04-27 | 清华大学 | Video coding/decoding method and device |
KR101671460B1 (en) | 2009-09-10 | 2016-11-02 | 에스케이 텔레콤주식회사 | Motion Vector Coding Method and Apparatus and Video Coding Method and Apparatus by Using Same |
US9060176B2 (en) * | 2009-10-01 | 2015-06-16 | Ntt Docomo, Inc. | Motion vector prediction in video coding |
US20110274162A1 (en) * | 2010-05-04 | 2011-11-10 | Minhua Zhou | Coding Unit Quantization Parameters in Video Coding |
KR20110090841A (en) * | 2010-02-02 | 2011-08-10 | (주)휴맥스 | Apparatus and method for encoding/decoding of video using weighted prediction |
JP2011160359A (en) | 2010-02-03 | 2011-08-18 | Sharp Corp | Device and method for predicting amount of block noise, image processor, program, and recording medium |
CN102439978A (en) * | 2010-03-12 | 2012-05-02 | 联发科技(新加坡)私人有限公司 | Motion prediction methods |
KR101752418B1 (en) * | 2010-04-09 | 2017-06-29 | 엘지전자 주식회사 | A method and an apparatus for processing a video signal |
KR101484281B1 (en) * | 2010-07-09 | 2015-01-21 | 삼성전자주식회사 | Method and apparatus for video encoding using block merging, method and apparatus for video decoding using block merging |
US9124898B2 (en) * | 2010-07-12 | 2015-09-01 | Mediatek Inc. | Method and apparatus of temporal motion vector prediction |
KR101373814B1 (en) * | 2010-07-31 | 2014-03-18 | 엠앤케이홀딩스 주식회사 | Apparatus of generating prediction block |
KR20120012385A (en) * | 2010-07-31 | 2012-02-09 | 오수미 | Intra prediction coding apparatus |
KR20120016991A (en) * | 2010-08-17 | 2012-02-27 | 오수미 | Inter prediction process |
KR101861714B1 (en) * | 2010-09-02 | 2018-05-28 | 엘지전자 주식회사 | Method for encoding and decoding video, and apparatus using same |
CN102006480B (en) * | 2010-11-29 | 2013-01-30 | 清华大学 | Method for coding and decoding binocular stereoscopic video based on inter-view prediction |
CN107071460B (en) * | 2010-12-14 | 2020-03-06 | M&K控股株式会社 | Apparatus for encoding moving picture |
US9473789B2 (en) * | 2010-12-14 | 2016-10-18 | M&K Holdings Inc. | Apparatus for decoding a moving picture |
US9609349B2 (en) * | 2010-12-14 | 2017-03-28 | M & K Holdings Inc. | Apparatus for decoding a moving picture |
KR101831311B1 (en) * | 2010-12-31 | 2018-02-23 | 한국전자통신연구원 | Method and apparatus for encoding and decoding video information |
US8737480B2 (en) * | 2011-01-14 | 2014-05-27 | Motorola Mobility Llc | Joint spatial and temporal block merge mode for HEVC |
WO2012117728A1 (en) * | 2011-03-03 | 2012-09-07 | パナソニック株式会社 | Video image encoding method, video image decoding method, video image encoding device, video image decoding device, and video image encoding/decoding device |
US9066110B2 (en) * | 2011-03-08 | 2015-06-23 | Texas Instruments Incorporated | Parsing friendly and error resilient merge flag coding in video coding |
US9288501B2 (en) * | 2011-03-08 | 2016-03-15 | Qualcomm Incorporated | Motion vector predictors (MVPs) for bi-predictive inter mode in video coding |
US9648334B2 (en) * | 2011-03-21 | 2017-05-09 | Qualcomm Incorporated | Bi-predictive merge mode based on uni-predictive neighbors in video coding |
US9485517B2 (en) * | 2011-04-20 | 2016-11-01 | Qualcomm Incorporated | Motion vector prediction with motion vectors from multiple views in multi-view video coding |
US8896284B2 (en) * | 2011-06-28 | 2014-11-25 | Texas Instruments Incorporated | DC-DC converter using internal ripple with the DCM function |
CN104837024B (en) * | 2011-08-29 | 2016-04-27 | 苗太平洋控股有限公司 | For the device of the movable information under merging patterns of decoding |
US9083983B2 (en) * | 2011-10-04 | 2015-07-14 | Qualcomm Incorporated | Motion vector predictor candidate clipping removal for video coding |
WO2013099244A1 (en) * | 2011-12-28 | 2013-07-04 | 株式会社Jvcケンウッド | Video encoding device, video encoding method, video encoding program, video decoding device, video decoding method, video decoding program |
US9794870B2 (en) * | 2013-06-28 | 2017-10-17 | Intel Corporation | User equipment and method for user equipment feedback of flow-to-rat mapping preferences |
JP6171627B2 (en) * | 2013-06-28 | 2017-08-02 | 株式会社Jvcケンウッド | Image encoding device, image encoding method, image encoding program, image decoding device, image decoding method, and image decoding program |
US9432685B2 (en) * | 2013-12-06 | 2016-08-30 | Qualcomm Incorporated | Scalable implementation for parallel motion estimation regions |
-
2012
- 2012-01-20 CN CN201710357777.7A patent/CN107277547B/en active Active
- 2012-01-20 CN CN201611152704.6A patent/CN106851311B/en active Active
- 2012-01-20 CN CN201710357011.9A patent/CN107197272B/en active Active
- 2012-01-20 KR KR1020147011165A patent/KR20140057683A/en not_active Application Discontinuation
- 2012-01-20 CN CN201280042121.1A patent/CN103765886B/en active Active
- 2012-01-20 CN CN201510012849.5A patent/CN104883576B/en active Active
- 2012-01-20 WO PCT/KR2012/000522 patent/WO2013032073A1/en active Application Filing
- 2012-01-20 MX MX2016014512A patent/MX351933B/en unknown
- 2012-01-20 KR KR1020147011166A patent/KR101492104B1/en active IP Right Review Request
- 2012-01-20 KR KR1020127006460A patent/KR101210892B1/en active IP Right Review Request
- 2012-01-20 MX MX2014002346A patent/MX343471B/en active IP Right Grant
- 2012-01-20 MX MX2017014038A patent/MX365013B/en unknown
- 2012-01-20 CN CN201710358284.5A patent/CN107257480B/en active Active
- 2012-01-20 KR KR1020147011164A patent/KR101492105B1/en active IP Right Review Request
- 2012-01-20 CN CN201710358282.6A patent/CN107277548B/en active Active
- 2012-01-20 MX MX2019005839A patent/MX2019005839A/en unknown
- 2012-01-20 BR BR112014004914-9A patent/BR112014004914B1/en active IP Right Grant
-
2013
- 2013-01-15 US US13/742,058 patent/US20130128982A1/en not_active Abandoned
- 2013-11-18 US US14/083,232 patent/US9948945B2/en active Active
-
2014
- 2014-12-30 US US14/586,406 patent/US20150110197A1/en not_active Abandoned
- 2014-12-30 US US14/586,471 patent/US9800887B2/en active Active
-
2016
- 2016-10-26 US US15/335,015 patent/US20170048539A1/en not_active Abandoned
-
2017
- 2017-09-19 US US15/708,526 patent/US10123032B2/en active Active
- 2017-09-19 US US15/708,740 patent/US10123033B2/en active Active
-
2018
- 2018-01-24 US US15/879,236 patent/US10123034B2/en active Active
- 2018-01-24 US US15/879,230 patent/US10148976B2/en active Active
- 2018-01-24 US US15/879,249 patent/US10123035B2/en active Active
- 2018-10-19 US US16/165,185 patent/US10798401B2/en active Active
-
2020
- 2020-10-02 US US17/061,676 patent/US11350121B2/en active Active
-
2021
- 2021-11-15 US US17/526,747 patent/US11689734B2/en active Active
-
2022
- 2022-02-15 US US17/672,272 patent/US11778225B2/en active Active
-
2023
- 2023-05-10 US US18/314,890 patent/US12022103B2/en active Active
- 2023-05-10 US US18/314,898 patent/US12028544B2/en active Active
- 2023-05-10 US US18/314,903 patent/US12034960B2/en active Active
- 2023-05-10 US US18/314,888 patent/US12034959B2/en active Active
-
2024
- 2024-06-07 US US18/737,070 patent/US20240323426A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120263231A1 (en) * | 2011-04-18 | 2012-10-18 | Minhua Zhou | Temporal Motion Data Candidate Derivation in Video Coding |
US20120320984A1 (en) * | 2011-06-14 | 2012-12-20 | Minhua Zhou | Inter-Prediction Candidate Index Coding Independent of Inter-Prediction Candidate List Construction in Video Coding |
US20120320969A1 (en) * | 2011-06-20 | 2012-12-20 | Qualcomm Incorporated | Unified merge mode and adaptive motion vector prediction mode candidates selection |
US20120320968A1 (en) * | 2011-06-20 | 2012-12-20 | Qualcomm Incorporated | Unified merge mode and adaptive motion vector prediction mode candidates selection |
US20130163668A1 (en) * | 2011-12-22 | 2013-06-27 | Qualcomm Incorporated | Performing motion vector prediction for video coding |
Cited By (70)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130077884A1 (en) * | 2010-06-03 | 2013-03-28 | Sharp Kabushiki Kaisha | Filter device, image decoding device, image encoding device, and filter parameter data structure |
US8805100B2 (en) * | 2010-06-03 | 2014-08-12 | Sharp Kabushiki Kaisha | Filter device, image decoding device, image encoding device, and filter parameter data structure |
US9544611B2 (en) * | 2010-08-17 | 2017-01-10 | M & K Holdings Inc. | Apparatus for decoding moving picture |
US20130022122A1 (en) * | 2010-08-17 | 2013-01-24 | Soo Mi Oh | Method of encoding moving picture in inter prediction mode |
US20160191943A1 (en) * | 2010-12-14 | 2016-06-30 | M & K Holdings Inc | Apparatus for decoding a moving picture |
US9467713B2 (en) * | 2010-12-14 | 2016-10-11 | M&K Holdings Inc. | Apparatus for decoding a moving picture |
US11997307B2 (en) * | 2011-11-07 | 2024-05-28 | Gensquare Llc | Apparatus for decoding video data |
US20210352316A1 (en) * | 2011-11-07 | 2021-11-11 | Infobridge Pte. Ltd. | Apparatus for decoding video data |
US11109036B2 (en) | 2013-10-14 | 2021-08-31 | Microsoft Technology Licensing, Llc | Encoder-side options for intra block copy prediction mode for video and image coding |
US10506254B2 (en) | 2013-10-14 | 2019-12-10 | Microsoft Technology Licensing, Llc | Features of base color index map mode for video and image coding and decoding |
US10582213B2 (en) | 2013-10-14 | 2020-03-03 | Microsoft Technology Licensing, Llc | Features of intra block copy prediction mode for video and image coding and decoding |
US10390034B2 (en) | 2014-01-03 | 2019-08-20 | Microsoft Technology Licensing, Llc | Innovations in block vector prediction and estimation of reconstructed sample values within an overlap area |
US10469863B2 (en) | 2014-01-03 | 2019-11-05 | Microsoft Technology Licensing, Llc | Block vector prediction in video and image coding/decoding |
US11284103B2 (en) | 2014-01-17 | 2022-03-22 | Microsoft Technology Licensing, Llc | Intra block copy prediction with asymmetric partitions and encoder-side search patterns, search ranges and approaches to partitioning |
US10542274B2 (en) | 2014-02-21 | 2020-01-21 | Microsoft Technology Licensing, Llc | Dictionary encoding and decoding of screen content |
US10368091B2 (en) | 2014-03-04 | 2019-07-30 | Microsoft Technology Licensing, Llc | Block flipping and skip mode in intra block copy prediction |
US10785486B2 (en) | 2014-06-19 | 2020-09-22 | Microsoft Technology Licensing, Llc | Unified intra block copy and inter prediction modes |
US10812817B2 (en) | 2014-09-30 | 2020-10-20 | Microsoft Technology Licensing, Llc | Rules for intra-picture prediction modes when wavefront parallel processing is enabled |
US9591325B2 (en) | 2015-01-27 | 2017-03-07 | Microsoft Technology Licensing, Llc | Special case handling for merged chroma blocks in intra block copy prediction mode |
US10659783B2 (en) | 2015-06-09 | 2020-05-19 | Microsoft Technology Licensing, Llc | Robust encoding/decoding of escape-coded pixels in palette mode |
US10148977B2 (en) * | 2015-06-16 | 2018-12-04 | Futurewei Technologies, Inc. | Advanced coding techniques for high efficiency video coding (HEVC) screen content coding (SCC) extensions |
US12058345B2 (en) | 2016-08-11 | 2024-08-06 | Lx Semicon Co., Ltd. | Method and apparatus for encoding/decoding a video using a motion compensation |
US11743473B2 (en) | 2016-08-11 | 2023-08-29 | Lx Semicon Co., Ltd. | Method and apparatus for encoding/decoding a video using a motion compensation |
US11336899B2 (en) * | 2016-08-11 | 2022-05-17 | Electronics And Telecommunications Research Institute | Method and apparatus for encoding/decoding a video using a motion compensation |
US12132915B2 (en) | 2016-08-11 | 2024-10-29 | Lx Semicon Co., Ltd. | Method and apparatus for encoding/decoding a video using a motion compensation |
US20190174136A1 (en) * | 2016-08-11 | 2019-06-06 | Electronics And Telecommunications Research Institute | Method and apparatus for image encoding/decoding |
US10630974B2 (en) * | 2017-05-30 | 2020-04-21 | Google Llc | Coding of intra-prediction modes |
US10986349B2 (en) | 2017-12-29 | 2021-04-20 | Microsoft Technology Licensing, Llc | Constraints on locations of reference blocks for intra block copy prediction |
US11146786B2 (en) | 2018-06-20 | 2021-10-12 | Beijing Bytedance Network Technology Co., Ltd. | Checking order of motion candidates in LUT |
US11146785B2 (en) | 2018-06-29 | 2021-10-12 | Beijing Bytedance Network Technology Co., Ltd. | Selection of coded motion information for LUT updating |
US11528501B2 (en) | 2018-06-29 | 2022-12-13 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between LUT and AMVP |
US11140385B2 (en) | 2018-06-29 | 2021-10-05 | Beijing Bytedance Network Technology Co., Ltd. | Checking order of motion candidates in LUT |
WO2020003274A1 (en) * | 2018-06-29 | 2020-01-02 | Beijing Bytedance Network Technology Co., Ltd. | Checking order of motion candidates in lut |
US12058364B2 (en) | 2018-06-29 | 2024-08-06 | Beijing Bytedance Network Technology Co., Ltd. | Concept of using one or multiple look up tables to store motion information of previously coded in order and use them to code following blocks |
US12034914B2 (en) | 2018-06-29 | 2024-07-09 | Beijing Bytedance Network Technology Co., Ltd | Checking order of motion candidates in lut |
US11153557B2 (en) | 2018-06-29 | 2021-10-19 | Beijing Bytedance Network Technology Co., Ltd. | Which LUT to be updated or no updating |
US10778997B2 (en) | 2018-06-29 | 2020-09-15 | Beijing Bytedance Network Technology Co., Ltd. | Resetting of look up table per slice/tile/LCU row |
US11973971B2 (en) | 2018-06-29 | 2024-04-30 | Beijing Bytedance Network Technology Co., Ltd | Conditions for updating LUTs |
US11159807B2 (en) | 2018-06-29 | 2021-10-26 | Beijing Bytedance Network Technology Co., Ltd. | Number of motion candidates in a look up table to be checked according to mode |
US11909989B2 (en) | 2018-06-29 | 2024-02-20 | Beijing Bytedance Network Technology Co., Ltd | Number of motion candidates in a look up table to be checked according to mode |
US11159817B2 (en) | 2018-06-29 | 2021-10-26 | Beijing Bytedance Network Technology Co., Ltd. | Conditions for updating LUTS |
US11895318B2 (en) | 2018-06-29 | 2024-02-06 | Beijing Bytedance Network Technology Co., Ltd | Concept of using one or multiple look up tables to store motion information of previously coded in order and use them to code following blocks |
US11245892B2 (en) | 2018-06-29 | 2022-02-08 | Beijing Bytedance Network Technology Co., Ltd. | Checking order of motion candidates in LUT |
TWI728388B (en) * | 2018-06-29 | 2021-05-21 | 大陸商北京字節跳動網絡技術有限公司 | Checking order of motion candidates in look up table |
US11877002B2 (en) | 2018-06-29 | 2024-01-16 | Beijing Bytedance Network Technology Co., Ltd | Update of look up table: FIFO, constrained FIFO |
RU2774105C2 (en) * | 2018-06-29 | 2022-06-16 | Бейджин Байтдэнс Нетворк Текнолоджи Ко., Лтд. | Partial/full pruning when adding an hmvp candidate for merging/amvp |
US10873756B2 (en) | 2018-06-29 | 2020-12-22 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between LUT and AMVP |
US11706406B2 (en) | 2018-06-29 | 2023-07-18 | Beijing Bytedance Network Technology Co., Ltd | Selection of coded motion information for LUT updating |
US11695921B2 (en) | 2018-06-29 | 2023-07-04 | Beijing Bytedance Network Technology Co., Ltd | Selection of coded motion information for LUT updating |
US11528500B2 (en) | 2018-06-29 | 2022-12-13 | Beijing Bytedance Network Technology Co., Ltd. | Partial/full pruning when adding a HMVP candidate to merge/AMVP |
US11134267B2 (en) | 2018-06-29 | 2021-09-28 | Beijing Bytedance Network Technology Co., Ltd. | Update of look up table: FIFO, constrained FIFO |
US11134244B2 (en) | 2018-07-02 | 2021-09-28 | Beijing Bytedance Network Technology Co., Ltd. | Order of rounding and pruning in LAMVR |
US11153558B2 (en) | 2018-07-02 | 2021-10-19 | Beijing Bytedance Network Technology Co., Ltd. | Update of look-up tables |
US11134243B2 (en) | 2018-07-02 | 2021-09-28 | Beijing Bytedance Network Technology Co., Ltd. | Rules on updating luts |
US11463685B2 (en) | 2018-07-02 | 2022-10-04 | Beijing Bytedance Network Technology Co., Ltd. | LUTS with intra prediction modes and intra mode prediction from non-adjacent blocks |
US11153559B2 (en) | 2018-07-02 | 2021-10-19 | Beijing Bytedance Network Technology Co., Ltd. | Usage of LUTs |
CN110876058A (en) * | 2018-08-30 | 2020-03-10 | 华为技术有限公司 | Historical candidate list updating method and device |
US20210297659A1 (en) | 2018-09-12 | 2021-09-23 | Beijing Bytedance Network Technology Co., Ltd. | Conditions for starting checking hmvp candidates depend on total number minus k |
US11159787B2 (en) | 2018-09-12 | 2021-10-26 | Beijing Bytedance Network Technology Co., Ltd. | Conditions for starting checking HMVP candidates depend on total number minus K |
US20200413044A1 (en) | 2018-09-12 | 2020-12-31 | Beijing Bytedance Network Technology Co., Ltd. | Conditions for starting checking hmvp candidates depend on total number minus k |
US11997253B2 (en) | 2018-09-12 | 2024-05-28 | Beijing Bytedance Network Technology Co., Ltd | Conditions for starting checking HMVP candidates depend on total number minus K |
US11589071B2 (en) | 2019-01-10 | 2023-02-21 | Beijing Bytedance Network Technology Co., Ltd. | Invoke of LUT updating |
US11909951B2 (en) | 2019-01-13 | 2024-02-20 | Beijing Bytedance Network Technology Co., Ltd | Interaction between lut and shared merge list |
US11140383B2 (en) | 2019-01-13 | 2021-10-05 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between look up table and shared merge list |
US11956464B2 (en) | 2019-01-16 | 2024-04-09 | Beijing Bytedance Network Technology Co., Ltd | Inserting order of motion candidates in LUT |
US11962799B2 (en) | 2019-01-16 | 2024-04-16 | Beijing Bytedance Network Technology Co., Ltd | Motion candidates derivation |
US11438576B2 (en) * | 2019-03-08 | 2022-09-06 | Tencent America LLC | Merge list construction in triangular prediction |
US11973937B2 (en) * | 2019-03-08 | 2024-04-30 | Tencent America LLC | Signaling of maximum number of triangle merge candidates |
US20220385890A1 (en) * | 2019-03-08 | 2022-12-01 | Tencent America LLC | Signaling of maximum number of triangle merge candidates |
US11641483B2 (en) | 2019-03-22 | 2023-05-02 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between merge list construction and other tools |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12034959B2 (en) | Method for generating prediction block in AMVP mode | |
KR101316060B1 (en) | Decoding method of inter coded moving picture | |
KR101430048B1 (en) | Apparatus for decoding a moving picture | |
KR20130067280A (en) | Decoding method of inter coded moving picture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: IBEX PT HOLDINGS CO., LTD., KOREA, REPUBLIC OF Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KIM, KWANGJE;OH, HYUNOH;REEL/FRAME:029842/0855 Effective date: 20130123 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |